
Summary

For over a year, the public availability of generative artificial intelligence (AI) tools has elicited a great deal of research, experimentation, and speculation about how this new technology can enhance and transform human endeavors, from the future of work and education broadly to specific settings such as clinical care. At the same time, generative AI raises many areas of caution concerning data security and privacy, the effectiveness and accuracy of the tools, bias or inequity in how the technology affects different groups of people, and the degree to which human agency and accountability are retained when generative AI is deployed.

Clinical care settings in particular demand careful attention to these issues.1 As Brainard, Tanden, and Prabhakar note in a White House briefing room release, “[w]ithout appropriate testing, risk mitigations, and human oversight, AI-enabled tools used for clinical decisions can make errors that are costly at best—and dangerous at worst.”2 The American Medical Association further notes the importance of informed guidance and policy on the use of AI in clinical care settings, given the “lagging effort” around comprehensive oversight of novel AI technologies in areas not already explicitly regulated, particularly in “clinical applications, such as some clinical decision support functions.”3

Machine learning (ML) and AI have been used in healthcare settings in many ways.4 These guidelines and recommendations specifically address generative AI, a particular type of AI technology and a growing set of tools with increasingly multimodal capabilities.5 The AMA defines generative AI as “a type of AI that can recognize, summarize, translate, predict, and generate text and other content based on knowledge gained from large datasets.”6 This ‘other content’ increasingly includes images, audio, video, and non-linguistic data, often in combination with written or spoken language. Since the release of OpenAI’s ChatGPT in late 2022, the number of stand-alone apps and interfaces has proliferated, including both proprietary and open-source tools. Generative AI is also increasingly embedded within other software, including products from Microsoft, Google, Adobe, and Epic.7

In June 2023, the University of Kentucky empaneled UK ADVANCE, a broad-based committee of experts, to examine generative AI and make recommendations to the UK campus and community regarding the implications of this rapidly evolving technology for higher education, research, clinical care, and beyond. UK ADVANCE takes an evidence-based approach, drawing on experts from many disciplines, and continues to monitor experiences across our local campus and community as well as national and global developments. For these guidelines, UK ADVANCE has sought input from multiple stakeholders.

After reviewing emerging evidence, experiences, and policies related to generative AI in clinical care, UK ADVANCE offers the following guidelines and recommendations concerning the use of all generative AI technologies and tools, including large language models (LLMs) as well as tools that work with other modalities such as images, audio, video, and non-linguistic data. Guidelines for the use of generative AI in both instruction and research are currently available on the UK ADVANCE website.

Generative AI is a rapidly evolving technology. These guidelines reflect our best understanding at the current time and may be updated to reflect the nature of the field as it continues to change.


1 Meskó and Topol 2023
2 Brainard, Tanden, and Prabhakar 2023
3 AMA 2023
4 ANA Center for Ethics and Human Rights 2022; Clipper et al. 2018; Jiang et al. 2017; Kaul, Enslin, and Gross 2020; Kavasidis et al. 2023
5 Topol 2023
6 AMA 2023. Ning et al. (2023) write that “the capability of generative AI to generate realistic content differentiates it from other general AI technology.” The critical difference between generative AI and other kinds of AI, adds Zewe (2023), is that generative AI “is trained to create new data, rather than making a prediction about a specific dataset.”
7 Diaz 2023. Because generative AI will increasingly be integrated into other digital tools, a critical component of AI literacy is “being capable of recognizing when it is being used” in the first place (Watkins 2024).