Do Not Trust Dr. AI in Medical Ethics
Artificial intelligence is all the rage these days, but this article may cause some rage of its own. In this study, researchers tested a type of AI called a large language model on a series of medical ethics cases. When small tweaks were made to some of these cases, the AI defaulted to intuitive answers rather than taking the new information into account. This tells us that Dr. AI is not ready to graduate from medical school and start a medical career just yet.
Studies like this are needed given what is at stake, namely people’s lives, and the ongoing debate over whether AI programs can take over the role of medical providers. I myself am deeply skeptical of AI’s ability to replace what we do daily in a medical office. I admit that it may outperform many substandard providers who are sloppy in either their knowledge or their judgment, but I just can’t see it replacing solid, old-fashioned, person-to-person medical care. At best, it will always be a tool that augments human doctors, not a replacement for them.
To answer the question of how far such a tool can go, the researchers used what are called “lateral thinking puzzles” to test whether the AI could adapt or would get stuck in prior answers. In these puzzles, a familiar scenario is subtly tweaked, and the AI must decide what to do next based on the new information. The models often failed to pick up on the change, preferring to repeat answers that were correct for the original puzzle but wrong for the modified version. In one example, the classic “surgeon’s dilemma” riddle was altered to state outright that the boy’s father was the surgeon, yet the AI still gave the familiar answer that the surgeon must be his mother.
Humans can give intuitive but wrong answers too, of course. We all must watch out for biases that cause us to overlook relevant data in our own decision-making. Rushing through decisions, whether on a written test or in a clinical setting, can lead to mistakes. This study simply shows that AI programs are not immune to the same kind of mistake. However, if we want to depend on them for health care, that problem needs fixing.
Helping others restore healthier, more abundant lives is still best done by living humans, not fancy AI programs. We were designed to care for one another, and some relationship between living beings is required for the best care. If we can use AI to augment that relationship rather than replace it, we will help one another. If we try to replace the human touch with an AI program, we are going in the wrong direction.
Original Article:
Shelly Soffer, Vera Sorin, Girish N. Nadkarni, Eyal Klang. Pitfalls of large language models in medical ethics reasoning. npj Digital Medicine, 2025; 8(1). DOI: 10.1038/s41746-025-01792-y
Thanks to ScienceDaily:
The Mount Sinai Hospital / Mount Sinai School of Medicine. “A simple twist fooled AI—and revealed a dangerous flaw in medical ethics.” ScienceDaily. ScienceDaily, 24 July 2025. <www.sciencedaily.com/releases/2025/07/250723045711.htm>.
Sanctuary Functional Medicine, under the direction of Dr. Eric Potter, IFMCP MD, provides functional medicine services to Nashville, Middle Tennessee, and beyond. We frequently treat patients from Kentucky, Alabama, Mississippi, Georgia, Ohio, Indiana, and more... offering the hope of healthier, more abundant lives to those with chronic illness.