Can AI Make Decisions For Patients?
AI can inform patient care, but without proper regulations and education, doctors may be left liable for its errors.
By Yiying Zhang
In August 2025, a 60-year-old man walked into an emergency room terrified. For weeks, he hadn’t been sleeping and was seeing things that weren’t there. By the time he arrived at the hospital, he was hallucinating and deeply paranoid. To doctors, the situation looked like a psychiatric emergency.
But the source of this unraveling wasn’t psychological. It wasn’t drugs. It wasn’t even anything anyone had done to him. It was sitting quietly in his kitchen cabinet. Physicians published his case in the Annals of Internal Medicine later in 2025, as a warning of what can happen when people treat chatbots as medical authorities.
Earlier that month, he had decided to “eat healthier.” Like millions of people now do, he turned to an AI chatbot for quick advice. From its response, he mistakenly believed that sodium bromide could be used like table salt. He began sprinkling it into his food every day, completely unaware that bromide builds up in the body and can cause severe neurological symptoms. By the time doctors tested his blood, his bromide levels were dangerously high.
The bromide case may appear extreme, but it captures a broader reality. Artificial intelligence is no longer a distant promise in healthcare. It now shapes decisions at multiple levels of care, from how patients seek advice at home to how clinicians prioritize cases inside hospitals.
While often framed as decision support, these systems exert real influence over judgment and workflow. As AI replaces doctors for health advice, the ethical question is no longer whether it works but whether healthcare systems are prepared to take responsibility for how it is used.

AI in Healthcare
Artificial intelligence did not enter medicine with a single breakthrough. It has seeped in slowly. Hospitals first adopted AI systems as background tools for administrative work such as record handling, documentation, and communication, aiming to manage workloads that had outpaced human capacity.
This shift occurred against a backdrop of widespread strain. Surveys suggest that roughly 40 percent of the U.S. healthcare workforce intends to leave the profession within the next five years, with burnout cited as a primary driver. Administrative burden plays a central role: in the Medscape Burnout Report, 61 percent of clinicians identified documentation and administrative tasks as a major contributor to burnout.
AI was therefore promoted as a way to process more data than clinicians could manage alone. As physician-researcher Eric Topol wrote in Nature Medicine in 2019, the technology promised a “high-performance medicine” built on speed and pattern recognition.
Then COVID-19 accelerated adoption further. Hospitals leaned heavily on digital triage tools, risk-prediction models, and remote diagnostic systems. But rapid adoption exposed limits. A 2021 study in JAMA Internal Medicine found that a widely used U.S. sepsis-prediction algorithm performed far worse in real practice than in trials, raising concerns about how models behave once they leave controlled environments.
As these systems moved from experiments to infrastructure, scholars began to ask not just whether they worked, but who was responsible when they failed. A 2025 article in BMC Medical Ethics examined AI-driven diagnostic systems and found recurring problems of unclear ownership of errors, limited transparency around data sources, and inconsistent documentation across vendors. Even the most advanced models operate behind layers of technical complexity that make it difficult for hospitals to evaluate their failures.
AI Can Guide Clinical Decisions, Not Make Them
Today, AI is embedded in everyday clinical workflows. These systems rarely replace clinical judgment outright, but they shape how attention is allocated and how decisions begin. The gap between reliance on AI and clarity about accountability has become one of the most pressing and unsolved challenges in modern healthcare.
Much of the public discussion around medical AI assumes that the main challenge is accuracy. Build a better model, the logic goes, and it will naturally find its way into hospitals. Andrew R. Janowczyk, PhD, who develops and evaluates medical AI systems in computational digital pathology, thinks the public story about AI in hospitals often skips the most important part: whether these tools actually make it into clinical practice at all.
“Many of the promises that were given by these tools actually don’t show up in clinical practice today,” said Janowczyk, who is an assistant professor of biomedical engineering at Georgia Tech School of Engineering and Emory University School of Medicine. “Very few of them get translated into a place where they are able to directly affect patient care.”
Janowczyk’s work focuses on what happens after a model is built. In a recent preprint outlining standards for deploying and accrediting clinical digital pathology tools, he and his colleagues describe the technical and institutional requirements that must be met before an algorithm can be used in patient care. Hospitals, he explains, are not research labs. They lack the computing infrastructure common in universities, operate under strict regulatory constraints, and must be able to document how tools are validated, monitored, and updated over time.

But the barriers are not only technical. They are ethical by design. In clinical settings, decisions must be justifiable, not just statistically impressive. Janowczyk emphasizes that a physician does not need to convince every colleague that a decision was correct, but only that it was reasonable given the information available at the time.
Black-box AI systems, whose inner workings cannot be inspected, cannot meet that burden. They can generate predictions, but they cannot explain why those predictions should override established standards of care. In medicine, that distinction determines who is accountable when harm occurs.
This is where responsibility becomes central to Janowczyk’s work. If an algorithm recommends a course of action that contradicts clinical guidelines, the physician who follows it bears the legal and moral risk. Algorithm developers, by contrast, typically disclaim responsibility, emphasizing that they do not practice medicine. That asymmetry makes unrestricted deployment of opaque systems untenable in healthcare. “You can’t point to a black box and say that’s why you made a medical decision,” he said. “That doesn’t hold up in a clinical or legal setting.”
In a 2025 survey by the American Medical Association, nearly half of physicians surveyed said that increased oversight was the top regulatory action needed to build trust in AI tools, underscoring that many clinicians feel the systems for evaluating AI models’ limitations and safety remain insufficient.
AI Use Without Regulation
Even when AI is described as “decision support,” it can still change what clinicians notice, what they trust, and what they do next, especially in a fast-paced working environment where time and attention are scarce.
John Banja, an associate professor and clinical ethicist at the Center for Ethics at Emory University, says the technology is currently far more visible in documentation workflows. But even there, it is already shaping clinical thinking. “I have a friend down at Grady Hospital. He is an ICU doctor, and he says he uses ChatGPT all the time to help him figure out whether he’s getting the diagnosis right,” said Banja.
This kind of use reflects the reality of clinical work today. Clinicians are often exhausted, interrupted, and expected to move quickly. In radiology, Banja said, there is an unsafe volume of studies. Under that pressure, algorithm suggestions are hard to ignore, even when clinicians are aware that the tools are imperfect.
This is the context where automation bias begins to take hold. “Consequently, healthcare professionals are going to depend more and more on this technology,” said Banja. “And in the process, they are going to lose certain skills.” The risk is not that clinicians stop thinking, but that repeated reliance on algorithmic cues gradually reshapes judgment, especially when there is little time to independently reassess every recommendation.
Yet when something goes wrong, responsibility does not shift with that influence. Under the current legal and regulatory framework, liability remains entirely on the human decision-maker. “You can’t sue a machine,” Banja said. “You can’t sue a device.” Even as AI systems increasingly shape clinical decisions, there is no clear mechanism for holding developers, hospitals, or vendors accountable for errors that emerge from those systems.
Regulation is trying to keep up with this growing mismatch between influence and responsibility. In early 2025, the U.S. Food and Drug Administration expanded its public list of AI-enabled medical devices, an effort to increase transparency around which systems are being used and how they are reviewed. The agency also published a cross-center update describing how its medical-device, drug, and biologics divisions are coordinating oversight for AI and machine-learning tools.
Professional organizations are attempting to fill the gap. In July 2025, the American Medical Association released a governance framework urging health systems to designate explicit internal owners for AI oversight, including executives responsible for safety, bias monitoring, drift detection, and post-deployment review. But these recommendations are voluntary, and implementation varies widely across institutions.
If responsibility for AI-driven decisions remains firmly on clinicians, then how these tools actually function in day-to-day clinical work becomes central to the ethical question. Banja points out that one of the most overlooked ethical questions surrounding medical AI is not whether it improves efficiency, but who that efficiency ultimately serves. The ethical stakes of medical AI do not end with clinicians. They are shaped just as powerfully by how hospitals choose to deploy the capacity these tools create.
Time for Rest or More Patients
Randy Tryon, a family physician in North Carolina, described using an AI scribe system to help with clinical documentation. “It takes maybe 30 minutes, maximum an hour, to finish my notes instead of two to three hours,” Tryon said. The tool listens to patient visits and drafts notes automatically, significantly reducing after-hours paperwork.
But this efficiency does not resolve the ethical tension; it reshapes clinical workflows and expectations. In theory, automation is supposed to give clinicians time back, creating more space for judgment and care. In practice, that time is rarely returned. Instead, efficiency gains are often absorbed by the system itself, translating into higher patient volume, tighter schedules, and rising expectations rather than relief for overextended staff.
“For a hospital administrator, if my nurses and doctors have freed up 20% to 30% of their time, they need to see more patients for profits,” said Banja. The capacity is immediately reallocated, not toward rest, reflection, or patient communication, but toward additional throughput.
This is where accountability becomes blurred. Clinicians remain legally and ethically responsible for the content of medical records, diagnoses, and treatment decisions. This means a physician may sign off on a chart drafted by an algorithm, but if an error emerges – whether from omission, misinterpretation, or subtle bias – the responsibility does not trace back to the tool. It stops with the human name on the chart.
Rather than simply helping clinicians, AI changes the structure of clinical responsibility without formally redistributing it. As these systems become more embedded in everyday care, the ethical challenge is no longer whether clinicians should trust AI, but whether healthcare systems are prepared to acknowledge how much influence they have already given it, and what obligations that influence creates.
Acknowledging Limitations
From Janowczyk’s perspective, many of these tensions are the predictable result of deploying AI into clinical systems that were never designed to absorb uncertainty. In his work, he has seen how easily technical performance can be mistaken for clinical readiness, and how responsibility remains murky once a tool moves from research into practice.
“The gap is always there,” Janowczyk said, referring to the difference between how AI systems perform in development and how they behave in real clinical environments. “The danger isn’t that the gap exists. The danger is pretending it doesn’t.”
For him, the issue is not simply whether an algorithm is accurate, but whether institutions are prepared to monitor, validate, and take responsibility for it once it begins influencing care. Medical AI cannot quietly change overnight: in accredited clinical settings, models must be frozen, documented, and revalidated before any update. This reflects how seriously healthcare systems treat risk, but also how slow accountability mechanisms can be.
The structure reveals a deeper mismatch. AI systems are often discussed as flexible, adaptive tools, yet the clinical environments they enter demand stability, traceability, and clear ownership. When those requirements are not met, responsibility falls back onto clinicians, even when they have little visibility into how a model was built or how it is used.
For patients, AI’s influence is mostly invisible. They see only its effects. What the 2025 bromide case that opened this story showed is that AI does not need to make a dramatic error to cause harm. It only needs to offer a suggestion that no one is prepared to assess or take responsibility for.
But AI will not leave healthcare. Its benefits, from faster imaging workflows to earlier disease detection to lighter administrative burdens, are real. Hospitals need these advantages, and clinicians often welcome them.
The challenge ahead is not simply improving models or tightening regulation, but deciding what kind of medical judgment, and whose judgment, we are willing to automate. If accountability, governance, and clinical realities are aligned, AI has the potential not just to accelerate care, but to strengthen the foundations of medical decision-making itself.