Introduction
Interest in artificial intelligence (AI) within the medical field, already rising, has increased exponentially since the introduction of ChatGPT in November 2022. The application providing a user interface to the ChatGPT large language model acquired a record 100 million users within two months of release and currently has a million visitors per month.1,2 While ChatGPT’s release made decades of AI advances widely accessible, the medical field has been utilizing AI functionalities since the 1940s, including automated analytics, data synthesis, and optimization strategies.3–5 These early efforts have gained impact in the past 15 years at the nexus of complementary advances in big data analytics, cloud computing and associated memory and storage gains, and vast quantities of curated healthcare data from electronic health records and other emerging sources.6–11
AI in Medicine: Current and Future Uses
Although the scope of AI is rapidly expanding, common uses in medicine have included risk stratification of patients based on personal or medical profiles,12–14 identification of abnormal laboratory or imaging findings and alerts to patients, physicians, and other health professionals,15,16 diagnosis of conditions based on available clinical information,17,18 and speech-to-text tools for clinical documentation.19 For example, cardiac risk stratification, sepsis alerts, and drug-drug interaction pop-up boxes are all forms of AI. Thus, artificial intelligence is already highly integrated within everyday clinical practice.
Promising future avenues for AI include new opportunities to enhance medical care, medical education, and research. AI offers potential to simulate patient scenarios, synthesize large quantities of information, and provide recommendations for diagrams, references, and other learning tools.2 Medical researchers have begun using ChatGPT and other forms of AI to analyze large quantities of data, including unstructured text from articles, and even to generate research hypotheses.20–23 And the uses for AI in clinical care will certainly continue to expand; the exponential growth in mobile health applications, telemedicine, and personalized care is well underway.24–29 As AI moves into new areas of medicine, we can anticipate the rise of new challenges and dilemmas.
Challenges and Concerns
AI has the same potential to transform the medical field as other industries, but many questions remain regarding ethics, regulation, and medico-legal issues, among others.30,31
Ethics
Early experiences with ChatGPT have further highlighted the well-studied ‘black box’ challenge of AI, where users have difficulty understanding AI outputs and developers have difficulty explaining what an output is based on.32 In medicine, this challenge raises concerns about clinicians’ confidence in incorporating AI outputs into their clinical judgment and their ability to explain decisions and guidance to their patients.31 Protecting patient autonomy and allowing informed consent, with transparency about the use and involvement of AI in care, should be a guiding principle.33,34
Regulation
Regulatory issues have been at the forefront of discussions about AI, with congressional representatives holding hearings on the topic and leaders of AI firms imploring legislators to impose greater regulation on their own technology. Unrepresentative data used to train AI models can introduce bias and exacerbate existing inequities.4 One documented example is models assigning systematically different sentiments to names associated with particular racial groups; many such issues have already surfaced, and more can be anticipated.35
Responsibility for Care
Finally, the use of automated analytics and decision-making in medicine makes the responsibility, and liability, for care more ambiguous.30 For example, if a large language model like ChatGPT reviews a chart and concludes that a patient has no history of diabetes, and the patient also recalls no such history, but an elevated glucose measurement is actually buried among old laboratory results, would the clinician be responsible for the discrepancy? We will need to develop standards of care for the use of AI in clinical work.
Winners and Losers in an Information Revolution
In addition to the concerns and challenges noted above, we must continuously evaluate who is benefiting and who may be harmed by AI’s integration into medicine. As with all technological advancements, not every individual or group stands to benefit.36–38 Media outlets have begun to report accounts of writers, editors, and other professionals having their roles eliminated and replaced by ChatGPT, and restructuring can be expected within the medical profession as well.39–41 There will be pressures on physicians, nurses, and other healthcare professionals to do more with less by using AI to streamline decision-making at the expense of patient care, and clinicians must guard against any overemphasis on efficiency and revenue that does not directly benefit patients. This is particularly true because physician ownership of practices has dropped from 72% in 1988 to a mere 24% in 2022, meaning that non-clinicians and individuals with business interests have more input than ever in patient care and in decisions about what tasks can, or should, be automated.42,43
We have already seen extreme examples of AI being used to undermine patient care, such as reports of major insurers using AI to automate the process of insurance denials and have physician ‘reviewers’ sign the denials without adequate time for review.44 Concerns regarding staffing ratios in health facilities and their impact on patient care were being raised prior to COVID-19 and have only intensified since.45–47 The expansion of AI will raise new ethical and patient safety issues that cannot all be foreseen.
Physicians and Nurses Can Ensure Patients Are Winners
In the setting of new and unpredictable shifts in health care delivery, physicians and other clinicians must champion patients’ needs respectfully but firmly. Patients cannot reasonably be expected to solve their own collective action problem and advocate for AI protections, staffing, or other needed regulations in healthcare.48,49 Nurses have done admirable work advocating for safe staffing, and physicians are well positioned to understand both the potential and the pitfalls of AI’s integration into medicine. Clinicians and public health advocates need to be at the table, both to address the workflows and wellbeing of clinicians and to advocate for patients. To be effective patient advocates, clinicians need to work with legislators on protections for patients. Physicians and nurses have faced repercussions, up to and including losing their jobs; Dr. Ming Lin, for example, was fired after advocating for patients during the COVID-19 pandemic.50 All clinicians need to be able to speak up freely about the role of AI and regulatory issues relevant to patient safety.
A Path Forward: Understand AI and Protect Patients
Physicians have guided modern medicine since the founding of the first medical schools and general hospitals in the 1700s, and we will guide our profession through this new era as well. Let us learn from those who have developed artificial intelligence. Used well, its potential is incredible, and, unlike medical training, the learning resources are often free and widely distributed. Those willing to do the work can have the knowledge, and those with the knowledge will have the most compelling voices for patient advocacy in the new era of AI in medicine. Those pioneering recent advances in artificial intelligence will need to learn from healthcare professionals as well. Our ethical principles, understanding of humanity, and commitment to patients will be a cornerstone for AI in medicine.
Dr. Diane Kuhn is an Assistant Professor of Emergency Medicine at Indiana University. Clinically, she is a nocturnist splitting time between the community and academic hospitals. Her research focus is improving the quality and value of emergency care. She has previously published on the use of automated tools in the electronic medical record, patient-centered care and patient experience ratings, and physician productivity and supervision models. She believes that physicians have a duty to advocate for patients at the societal and legislative level in addition to clinical encounters. As such, she is active in patient advocacy work and a member of the American Academy of Emergency Medicine, Physicians for Patient Protection, and the Indiana State Medical Association.
Dr. Edmond Ramly is an Assistant Professor at the University of Wisconsin-Madison in Family Medicine and Community Health and in Industrial and Systems Engineering. His work improves care quality and workflows in outpatient settings through Implementation Science and Human Factors. His research drives care that is both evidence-based and human-centered by balancing standardization and adaptation to local context. Contributions include the design, implementation, and scale up of systems interventions for chronic disease prevention and management as well as statewide data-driven quality improvement. He is currently developing novel methods to streamline how evidence-based care is implemented and studying the consequences of telehealth expansion for health disparities. Dr. Ramly has served as program chair and chair of the Macroergonomics Technical Group of the Human Factors and Ergonomics Society and co-authored the AHRQ/NSF federal report on industrial and systems engineering in health care.
References
1.ChatGPT Statistics 2023 Revealed: Insights & Trends. Published February 22, 2023. Accessed June 2, 2023. https://blog.gitnux.com/chat-gpt-statistics/
2.Eysenbach G. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers. JMIR Med Educ. 2023;9:e46885. doi:10.2196/46885
3.Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018;2(10):719-731. doi:10.1038/s41551-018-0305-z
4.Haug CJ, Drazen JM. Artificial Intelligence and Machine Learning in Clinical Medicine, 2023. Drazen JM, Kohane IS, Leong TY, eds. N Engl J Med. 2023;388(13):1201-1208. doi:10.1056/NEJMra2302038
5.Turing AM. Computing Machinery and Intelligence. Mind. 1950;59(236):433-460.
6.Palanisamy V, Thirunavukarasu R. Implications of big data analytics in developing healthcare frameworks – A review. Journal of King Saud University - Computer and Information Sciences. 2019;31(4):415-425. doi:10.1016/j.jksuci.2017.12.007
7.Ahmed Z, Mohamed K, Zeeshan S, Dong X. Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine. Database. 2020;2020:baaa010. doi:10.1093/database/baaa010
8.Kamel Boulos MN, Peng G, VoPham T. An overview of GeoAI applications in health and healthcare. Int J Health Geogr. 2019;18(1):7. doi:10.1186/s12942-019-0171-2
9.Wu T, Wu F, Qiu C, Redouté JM, Yuce MR. A Rigid-Flex Wearable Health Monitoring Sensor Patch for IoT-Connected Healthcare Applications. IEEE Internet of Things Journal. 2020;7(8):6932-6945.
10.Dang LM, Piran MdJ, Han D, Min K, Moon H. A Survey on Internet of Things and Cloud Computing for Healthcare. Electronics. 2019;8(7):768. doi:10.3390/electronics8070768
11.Aceto G, Persico V, Pescapé A. Industry 4.0 and Health: Internet of Things, Big Data, and Cloud Computing for Healthcare 4.0. Journal of Industrial Information Integration. 2020;18:100129. doi:10.1016/j.jii.2020.100129
12.Lin A, Kolossváry M, Motwani M, et al. Artificial Intelligence in Cardiovascular Imaging for Risk Stratification in Coronary Artery Disease. Radiology: Cardiothoracic Imaging. 2021;3(1):e200512. doi:10.1148/ryct.2021200512
13.Thomas J, Haertling T. AIBx, Artificial Intelligence Model to Risk Stratify Thyroid Nodules. Thyroid. 2020;30(6):878-884. doi:10.1089/thy.2019.0752
14.Suri JS, Paul S, Maindarkar MA, et al. Cardiovascular/Stroke Risk Stratification in Parkinson’s Disease Patients Using Atherosclerosis Pathway and Artificial Intelligence Paradigm: A Systematic Review. Metabolites. 2022;12(4):312. doi:10.3390/metabo12040312
15.Wu JT, Wong KCL, Gur Y, et al. Comparison of Chest Radiograph Interpretations by Artificial Intelligence Algorithm vs Radiology Residents. JAMA Netw Open. 2020;3(10):e2022779. doi:10.1001/jamanetworkopen.2020.22779
16.Pianykh OS, Langs G, Dewey M, et al. Continuous Learning AI in Radiology: Implementation Principles and Early Applications. Radiology. 2020;297(1):6-14. doi:10.1148/radiol.2020200038
17.Natarajan S, Jain A, Krishnan R, Rogye A, Sivaprasad S. Diagnostic Accuracy of Community-Based Diabetic Retinopathy Screening With an Offline Artificial Intelligence System on a Smartphone. JAMA Ophthalmol. 2019;137(10):1182. doi:10.1001/jamaophthalmol.2019.2923
18.Zhang S, Wang Y, Zheng Q, Li J, Huang J, Long X. Artificial intelligence in melanoma: A systematic review. Journal of Cosmetic Dermatology. 2022;21(11):5993-6004. doi:10.1111/jocd.15323
19.Krausman A, Kelley T, McGhee S, Schaefer KE, Fitzhugh S. Using Dragon for Speech-to-Text Transcription in Support of Human-Autonomy Teaming Research.
20.Dahmen J, Kayaalp ME, Ollivier M, et al. Artificial intelligence bot ChatGPT in medical research: the potential game changer as a double-edged sword. Knee Surg Sports Traumatol Arthrosc. 2023;31(4):1187-1189. doi:10.1007/s00167-023-07355-6
21.Panch T, Szolovits P, Atun R. Artificial intelligence, machine learning and health systems. J Glob Health. 2018;8(2):020303. doi:10.7189/jogh.08.020303
22.Punia SK, Kumar M, Stephan T, Deverajan GG, Patan R. Performance Analysis of Machine Learning Algorithms for Big Data Classification: ML and AI-Based Algorithms for Big Data Analysis. International Journal of E-Health and Medical Communications (IJEHMC). 2021;12(4):60-75. doi:10.4018/IJEHMC.20210701.oa4
23.Hou R, Kong Y, Cai B, Liu H. Unstructured big data analysis algorithm and simulation of Internet of Things based on machine learning. Neural Computing & Applications. 2020;32(10):5399-5407. doi:10.1007/s00521-019-04682-z
24.Aldekhyyel RN, Almulhem JA, Binkheder S. Usability of Telemedicine Mobile Applications during COVID-19 in Saudi Arabia: A Heuristic Evaluation of Patient User Interfaces. Healthcare. 2021;9(11):1574. doi:10.3390/healthcare9111574
25.Weinstein RS, Lopez AM, Joseph BA, et al. Telemedicine, Telehealth, and Mobile Health Applications That Work: Opportunities and Barriers. The American Journal of Medicine. 2014;127(3):183-187. doi:10.1016/j.amjmed.2013.09.032
26.Baldwin JL, Singh H, Sittig DF, Giardina TD. Patient portals and health apps: Pitfalls, promises, and what one might learn from the other. Healthcare. 2017;5(3):81-85. doi:10.1016/j.hjdsi.2016.08.004
27.Ventola CL. Mobile Devices and Apps for Health Care Professionals: Uses and Benefits. P T. 2014;39(5):356-364.
28.Haffey F, Brady RRW, Maxwell S. Smartphone apps to support hospital prescribing and pharmacology education: a review of current provision. British Journal of Clinical Pharmacology. 2014;77(1):31-38. doi:10.1111/bcp.12112
29.Kao CK, Liebovitz DM. Consumer Mobile Health Apps: Current State, Barriers, and Future Directions. PM&R. 2017;9(5S):S106-S115. doi:10.1016/j.pmrj.2017.02.018
30.Zhang J, Zhang ZM. Ethics and governance of trustworthy medical artificial intelligence. BMC Med Inform Decis Mak. 2023;23(1):7. doi:10.1186/s12911-023-02103-9
31.Nolan P. Artificial intelligence in medicine – is too much transparency a good thing? Medico-Legal Journal. Published online January 19, 2023. doi:10.1177/00258172221141243
32.Adadi A, Berrada M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access. 2018;6:52138-52160. doi:10.1109/ACCESS.2018.2870052
33.Keskinbora KH. Medical ethics considerations on artificial intelligence. Journal of Clinical Neuroscience. 2019;64:277-282. doi:10.1016/j.jocn.2019.03.001
34.Rigby MJ. Ethical Dimensions of Using Artificial Intelligence in Health Care. AMA Journal of Ethics. 2019;21(2):121-124. doi:10.1001/amajethics.2019.121
35.Raub M. Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices. Arkansas Law Review. 71.
36.Iqbal Z. Outsourcing: A Review of Trends, Winners & Losers and Future Directions. 4(8).
37.Blattman C, Hwang J, Williamson JG. Winners and losers in the commodity lottery: The impact of terms of trade growth and volatility in the Periphery 1870–1939. Journal of Development Economics. 2007;82(1):156-179. doi:10.1016/j.jdeveco.2005.09.003
38.Wolnicki M, Piasecki R. The New Luddite Scare: The Impact of Artificial Intelligence on Labor, Capital and Business Competition between US and China. Journal of Intercultural Management. 2019;11(2):5-20. doi:10.2478/joim-2019-0007
39.ChatGPT took their jobs. Now they’re dog walkers and HVAC techs. - The Washington Post. Accessed June 5, 2023. https://www.washingtonpost.com/technology/2023/06/02/ai-taking-jobs/
40.A.I. firms are trying to replace voice actors, and they’re getting help from voice actors to do it. Accessed June 5, 2023. https://www.yahoo.com/now/firms-trying-replace-voice-actors-120000624.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAMBeBNCMlhaRLRULYhUCL9J1uYXQJvjxosxZh-R3R6MZQ3eQPbtOBNm6WPjSDHKsAbvMvSiONH8SSR8FU2F0M77NnQYjpJq_Jmm2ziHfK04M_pymb1sEqNCx4ePlIk4FBc8Rbjt1W5hC7sUC10vXu0Ok2mI9ESRRuDAibZIvx98m
41.Shah NR. Health Care in 2030: Will Artificial Intelligence Replace Physicians? Ann Intern Med. 2019;170(6):407-408. doi:10.7326/M19-0344
42.Employed physicians now exceed those who own their practices. American Medical Association. Published May 10, 2019. Accessed June 3, 2023. https://www.ama-assn.org/about/research/employed-physicians-now-exceed-those-who-own-their-practices
43.A Growing Number of Physicians are Employed | Merritt Hawkins. Accessed June 5, 2023. https://www.merritthawkins.com/news-and-insights/blog/healthcare-news-and-trends/increase-of-employed-physicians/
44.How algorithms are being used to deny health insurance claims in bulk. PBS NewsHour. Published April 2, 2023. Accessed June 3, 2023. https://www.pbs.org/newshour/show/how-algorithms-are-being-used-to-deny-health-insurance-claims-in-bulk
45.Michtalik HJ, Yeh HC, Pronovost PJ, Brotman DJ. Impact of Attending Physician Workload on Patient Care: A Survey of Hospitalists. JAMA Internal Medicine. 2013;173(5):375-377. doi:10.1001/jamainternmed.2013.1864
46.Duffield C, Diers D, O’Brien-Pallas L, et al. Nursing staffing, nursing workload, the work environment and patient outcomes. Applied Nursing Research. 2011;24(4):244-255. doi:10.1016/j.apnr.2009.12.004
47.Gorges RJ, Konetzka RT. Staffing Levels and COVID-19 Cases and Outbreaks in U.S. Nursing Homes. Journal of the American Geriatrics Society. 2020;68(11):2462-2466. doi:10.1111/jgs.16787
48.Olson M. The Logic of Collective Action: Public Goods and the Theory of Groups, With a New Preface and Appendix. Revised edition. Harvard University Press; 1971.
49.Beaglehole R, Bonita R, Horton R, Adams O, McKee M. Public health in the new era: improving health through collective action. The Lancet. 2004;363(9426):2084-2086. doi:10.1016/S0140-6736(04)16461-1
50.Stone W. An ER Doctor Lost His Job After Criticizing His Hospital On COVID-19. Now He’s Suing. NPR. https://www.npr.org/sections/health-shots/2020/05/29/865042307/an-er-doctor-lost-his-job-after-criticizing-his-hospital-on-covid-19-now-hes-sui. Published May 29, 2020. Accessed June 3, 2023.
51.Rangachari P, Woods JL. Preserving Organizational Resilience, Patient Safety, and Staff Retention during COVID-19 Requires a Holistic Consideration of the Psychological Safety of Healthcare Workers. International Journal of Environmental Research and Public Health. 2020;17(12):4267. doi:10.3390/ijerph17124267