Practice of Medicine


Unlocking Potential, Addressing Risks of AI in Medical Practice

By: Nevarda Smith, Chief Technology Officer, MagMutual

“AI has allowed me, as a physician, to be 100% present for my patients.”

— Michelle Thompson, DO, family medicine specialist, on a smartphone-based generative AI tool that records, summarizes and organizes interactions with patients (The New York Times)

“Fix the system, but not by permanently invading my privacy and altering my relationships with my doctors. No, you may NOT record me!”

— Anonymous reader reacting to the NYT article cited above

Artificial intelligence (AI) can be one of the most powerful tools in the physician’s toolbox. However, as with most advanced technologies, there are distinct advantages and disadvantages to using it in healthcare. While AI holds great potential, its deployment in healthcare is still new, so it's important to consider both the benefits and risks that come with it — including how AI might impact the physician-patient relationship and doctors themselves.

Benefits: Improved Outcomes, Reduced Costs

Improved diagnosis and patient outcomes. Since AI can process and analyze enormous amounts of data, it can help to identify patterns and make predictions that can lead to earlier and more accurate diagnosis of diseases. “One of AI’s most promising roles is in clinical decision support at the point of patient care. AI algorithms analyze a vast amount of patient data to assist medical professionals in making more informed decisions about care — outperforming traditional tools like the Modified Early Warning Score (MEWS),” according to a recent American Hospital Association article.

In addition, by making documentation faster and easier, AI can free up time for healthcare providers so they can focus more on patient care. “As a physician with a degree in health informatics, I’m always looking for technology solutions to help make workflows better and to battle burnout,” Connecticut internist/pediatrician Vasanth Kainkaryam, M.D., said in a recent Medical Economics article. “[AI] has changed the entire way I work with patients. I’m able to focus on having conversations and thinking aloud with patients instead of remembering what I have to write down.”

Proactive, personalized medicine. By analyzing a patient's unique genetics and health history, AI can help develop personalized treatment plans that can lead to better patient outcomes. With AI's predictive capabilities, healthcare can shift from a reactive model to a more proactive and preventive one, potentially leading to healthier populations.

Increased healthcare access. According to one recent report, roughly 121 million people — 36% of the U.S. population — live in areas with inadequate healthcare facilities and services. Remote patient monitoring allows healthcare providers to track patients' health in real time, identify potential issues early and intervene when necessary, all without the need for frequent in-person visits, thereby increasing access to healthcare in underserved areas.

Reduced administrative and healthcare costs. By increasing efficiency, reducing the risk of misdiagnosis and enabling preventive care, AI has the potential to reduce overall healthcare costs. For example, by automating data entry, appointment scheduling and similar functions, AI can reduce administrative costs in the medical office, which account for 34% of total healthcare costs, according to an MIT article.

Risks: Privacy, Accuracy, Accountability

Data privacy and security concerns. The spotlight on data privacy has intensified in recent years and well over 100 nations have passed laws devoted to protecting consumers and their data. AI in healthcare involves the collection and processing of massive amounts of sensitive health data, so its use raises significant concerns about data privacy and the potential for increased harm to patients in the event of a data breach.

Bias and inaccuracy. If the data used to develop AI systems is incomplete or flawed, it could lead to inaccurate diagnoses or treatments. As a paper in the National Institutes of Health library reminds us, health data used to train algorithms is often collected from a mostly white population, so models may be biased against Black, Indigenous and people of color (BIPOC). In addition, new research suggests that people may absorb biases from AI, perhaps further skewing diagnosis and care.

While AI models have demonstrated the ability to perform well on medical exams, it's essential to note that they do not possess the comprehensive understanding or clinical experience of a human physician. Errors or inaccuracies within the modeling process, known as "hallucinations," can and do occur. A hallucination happens when an AI system generates output that is not grounded in real or accurate information; hallucinations are typically caused by training AI models on biased or incomplete data, and the resulting output is false or misleading. The risk of hallucination is significantly higher with generative AI models, whose entire outputs can be fabricated from non-existent data. This is a significant concern for healthcare, where accuracy and reliability are mission critical.

Technology failures. Like any technology, AI systems can fail or malfunction, which could have serious consequences in a healthcare context. With too much reliance on AI, a technological failure could lead to harmful patient outcomes.

Ethical and accountability issues. AI systems can't make ethical judgments or consider the moral and social aspects that often play a role in healthcare decisions, leading to potential ethical dilemmas. In case of an error or harm, determining accountability can be complicated.

Equity concerns. While AI could increase access to healthcare, costs associated with its use in personal medical devices also could do the opposite, potentially widening health disparities. “The expense involved in procuring a wearable device could keep some patients with lower incomes from participating in incentive programs, so they lose out on a potential financial reward as well as the potential health benefits” remote monitoring offers, one AMA article asserts.

Regulatory challenges. The rapidly evolving nature of AI technologies poses a challenge for regulatory bodies interested in ensuring their safety, effectiveness and ethical use. In December 2023, the European Union reached agreement on the world's first artificial intelligence act. A few other nations have drafted legislation or clarified current data, privacy and copyright protections. However, according to The Washington Post, few nations have passed laws targeting AI regulation at this time.

Marriage of the Physician and AI

Despite the tremendous potential of AI, its adoption has also raised concerns about the future role of physicians. Will AI supplant doctors? Not likely.

As a recent Harvard Medical School article said, “There is speculation about AI eventually replacing physicians, particularly in fields like radiology, pathology and dermatology, where AI's diagnostic ability can match or even exceed that of clinicians. However, research suggests that physician-machine collaborations will outperform either one alone.” The AMA uses the term “augmented intelligence” rather than “artificial intelligence” to reflect its perspective that artificial intelligence tools and services support, rather than replace, human decision making.

But AI will drastically change how doctors practice medicine in the future, hopefully for the better.

Because AI can automate routine and repetitive tasks such as administrative work, data entry and basic diagnosis, doctors will be free to concentrate on more complex and high-level tasks, such as patient care, critical decision making and research. The availability of AI also may drive more care to the advanced practice provider, whose ranks already are growing significantly, particularly in family medicine and pediatrics. This shift will allow all healthcare professionals to focus on areas where their expertise is most needed, ultimately improving patient outcomes and the efficiency of healthcare delivery.

The integration of AI in healthcare doesn't diminish the importance of physicians' clinical judgment. Yes, it’s a powerful tool, but AI lacks the empathy, nuanced decision making and ethical discretion inherent in humans. This necessitates a model of care where AI and physicians are partners — leveraging the best of both worlds.

As chief technology officer of MagMutual, Nevarda Smith has spearheaded multiple successful AI projects.

Disclaimer: The information in this article concerning artificial intelligence and its use in healthcare is intended for general informational purposes only. While efforts are made to ensure the accuracy and reliability of the information presented, MagMutual cannot guarantee its completeness, suitability or validity for any particular purpose. Users are advised to verify information about the use of artificial intelligence in healthcare with other credible sources and to exercise their own judgment when applying it to specific situations.




The information provided in this resource does not constitute legal, medical or any other professional advice, nor does it establish a standard of care. This resource has been created as an aid to you in your practice. The ultimate decision on how to use the information provided rests solely with you, the PolicyOwner.