The public release of generative artificial intelligence (generative AI) tools represents a significant moment for the University of Kentucky’s research and scholarly enterprise. Since late 2022, the capabilities of generative AI have proliferated and advanced rapidly. Machine learning and artificial intelligence are established research methods, but generative AI is novel in that it produces content (text, code, images, sound, video) based on user input and dialogue, and it raises additional questions.

Emerging research has explored the potential for innovation and efficiency that generative AI presents. UK has foundational research strengths across many disciplines that use and benefit from AI, machine learning, and deep learning. UK has seen significant growth in AI-related grants, with many more opportunities to secure expanded AI research funding across nearly every federal agency and through philanthropic avenues. With a comprehensive AI strategy, we can focus our strengths to fully embrace and use AI technology to advance our mission: not only to facilitate learning and expand knowledge, but also to better serve our global community by discovering, disseminating, sharing, and applying knowledge. We aim to accelerate our transdisciplinary research agendas, educate our research faculty and staff on state-of-the-art AI platforms, and address the needs of our corporate partners and citizens of the Commonwealth who rely on our university for AI-related training.

At the same time, there are documented concerns with significant implications for research and scholarly activity, such as the accuracy and bias of generated information, questions of authorship and transparency, intellectual property rights such as copyright, and data privacy.

The University of Kentucky promotes and expects a culture of research integrity and the responsible and ethical conduct of research. Research integrity depends on the reliability and trustworthiness of research. Responsible conduct of research and scholarly activity (RCR) is founded on core values such as honesty, accuracy, efficiency, and impartiality. Generative AI tools have the potential to advance and enhance research and scholarly work when used responsibly. These recommendations are offered for all faculty, staff, and trainees (visiting scientists, postdoctoral fellows, graduate students, undergraduates, etc.) who participate in research as well as scholarly or creative work.

Following an initial review of emerging U.S. federal agency rules and guidelines from professional organizations and journals, among other sources, regarding the use of generative AI tools in research, UK ADVANCE offers the following guidance in response to frequently asked questions from the UK research community.

Should generative AI be used for research?

Generative AI tools have the potential to enhance research outputs and contribute to knowledge. Their use in research also carries risks, including the potential to generate content that is inaccurate, inappropriate, unoriginal, or biased. In particular, text generation tools such as ChatGPT are prone to confabulations, commonly called “hallucinations”: they can generate plausible-sounding “facts” that are not true. So, although such tools may be used for organizing text, users should verify every statement these tools produce before retaining it in a final output. Generative AI tools paraphrase from various sources, which may raise issues of plagiarism and intellectual property; these tools have also been found to cite incorrect sources or fabricate references. When using a generative AI tool, it is best practice to verify or validate all generated content against independent, reliable sources.
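One practical check, since fabricated references are a known failure mode, is to confirm that each citation’s DOI resolves to a real record in a bibliographic registry. The sketch below is a minimal illustration only, not UK guidance: it queries the public Crossref REST API, the second DOI is a deliberately fake placeholder, and a real check should also compare titles and authors, since a hallucinated citation can attach a genuine DOI to the wrong work.

# Minimal sketch: flag AI-generated references whose DOIs are not
# registered with Crossref. Illustrative only, not UK guidance.
import json
import urllib.error
import urllib.request

def doi_is_registered(doi: str) -> bool:
    """Return True if the DOI resolves to a record in the Crossref index."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
        # A registered DOI returns a metadata record with a title.
        return bool(record["message"].get("title"))
    except urllib.error.HTTPError:
        return False  # 404 and similar responses: DOI is not registered

# Example: one real DOI (Watson & Crick 1953) and one fabricated placeholder.
for doi in ["10.1038/171737a0", "10.9999/not.a.real.doi"]:
    status = "registered" if doi_is_registered(doi) else "NOT FOUND - verify manually"
    print(f"{doi}: {status}")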

Additionally, what is considered appropriate use of generative AI in research differs by discipline. Check with your disciplinary authorities, organizations, funding agencies, and publications for a more context-specific understanding of how generative AI may be used in research and scholarly activity in your area.

Can generative AI be used for theses, dissertations, or comprehensive exams?

What is considered appropriate use of generative AI in theses, dissertations (and the research underlying them), and comprehensive exams differs by discipline. Check with your disciplinary authorities, organizations, funding agencies, and publications for a more context-specific understanding of how generative AI may be used in ways that are referenced, disclosed, and/or cited appropriately. Additionally, consult all requirements, guidelines, and regulations for theses, dissertations, and comprehensive exams (e.g., in the department, college, and Graduate School), as well as the student’s chair or adviser.

Can generative AI be listed as an author of research work?

Generative AI cannot be designated as an author because it cannot be held accountable for issues such as research misconduct/plagiarism or intellectual property misuse. Most journals’ criteria for authorship would not qualify generative AI as an author, and the Committee on Publication Ethics (COPE) has asserted that “AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.” The International Committee of Medical Journal Editors (ICMJE) lists criteria for authorship, and the UK research website outlines criteria for authorship.

Who is responsible for content generated by AI?

Researchers are responsible for the content and accuracy of all aspects of their work. Human authors must take responsibility for the content and the accuracy, factualness, and veracity of the data and analysis presented in the research. Generative AI enables potential new uses in research, but “it does not excuse our judgment or accountability.”

How should generative AI use be described in reported research results?

Journals have different rules for reporting the use of generative AI in manuscripts. Generally, journals, including those published by Taylor & Francis and Springer, have stated that input from AI must be detailed in the Materials and Methods section, Acknowledgements section, or a similar section for transparency. Any publication of reported results should disclose whether a generative AI tool was used, which tool, for what parts of the publication, and how it was used. Other best practices include indicating the specific language model in addition to the generative AI tool, as well as the date(s) of use, e.g., “ChatGPT Plus, GPT-4, 19-20 September 2023.”

Is it permissible to use generative AI for grant writing?

Funding agencies and other sponsors expect original ideas and concepts from grant applicants. Concerns about using generative AI for grant writing include that AI may generate content that is inaccurate, outdated, or biased. AI tools paraphrase from various sources, which could result in plagiarism, which would in turn constitute research misconduct or lead to intellectual property issues. AI tools have also been found to cite incorrect sources or to create false references. When using a generative AI tool, it is best practice to verify or validate the content provided against independent, reliable sources. As with all questions of grant writing protocol, check each funding agency’s guidelines for regulations regarding the use of generative AI in writing proposals for its programs.

Can generative AI be used in peer review of grant applications?

One federal agency, the National Institutes of Health (NIH), has stated that AI cannot be used in peer review. “NIH prohibits NIH scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals. NIH is revising its Security, Confidentiality, and Non-disclosure Agreements for Peer Reviewers to clarify this prohibition. Reviewers should be aware that uploading or sharing content or original concepts from an NIH grant application, contract proposal, or critique to online generative AI tools violates the NIH peer review confidentiality and integrity requirements.”

Another agency, the National Science Foundation (NSF), has announced that "NSF reviewers are prohibited from uploading any content from proposals, review information and related records to non-approved generative AI tools...A key observation for reviewers is that sharing proposal information with generative AI technology via the open internet violates the confidentiality and integrity principles of NSF's merit review process."

For information on other agencies’ or sponsors’ policies on generative AI use in peer review of grant applications, please contact the agencies or sponsors directly.

What privacy concerns arise in using generative AI in research?

Inputting any research data into a generative AI tool makes that data available to the tool and for its further use. Accordingly, data privacy review is needed before any Protected Data (AR 10.7) is entered into a generative AI tool (whether the tool is publicly available or not) to ensure that the tool’s data privacy and security program complies with all applicable laws and university guidelines. This process can be initiated by contacting the UK Information Technology Services Governance, Risk and Compliance (GRC) team at GRC@uky.edu.

Generative AI tools that are public and available for use by anyone pose elevated risks to privacy when research data is entered, in particular protected health information (PHI), personally identifiable information or other personal information protected by laws such as FERPA, and any proprietary information.

Unless the UKHC InfoSec Data Sharing Committee has confirmed that an AI tool is HIPAA compliant and supports PHI input, do not put research data containing PHI into a generative AI tool or other software. Additionally, other non-public or proprietary research data should not be placed into an open, publicly available AI tool without UK ITS GRC approval.

What are some considerations when using generative AI in human subject/participant research?

Training data that is not representative of, or under-represents, societal groups can result in biased or skewed model outputs. This can occur when a generative AI system is built on limited raw data sets and/or when its algorithmic designers or programmers hold biases. If algorithms are not validated or used carefully, they can produce AI-generated content that is biased with respect to race, age, culture, gender, sexual orientation, socioeconomic status, color, or personality traits, to name a few. Research data resulting from biased generative AI use can create mistrust and distorted results. It is recommended that researchers develop plans to routinely evaluate the AI tools used in their research for bias, for example with a simple disparity check like the sketch below.
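As one illustration of such a plan (a minimal sketch, not a validated audit; the group labels and outputs below are hypothetical), a routine evaluation might compare the rate of favorable outputs a tool produces across demographic groups, a basic demographic-parity check:

# Minimal sketch of a routine bias check: compare the rate of a
# "favorable" model output across demographic groups (demographic parity).
# Hypothetical data; a real evaluation needs larger samples,
# significance testing, and domain-appropriate fairness metrics.
from collections import defaultdict

# Each record: (group label, whether the AI tool produced a favorable output).
outputs = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
for group, favorable in outputs:
    counts[group][0] += int(favorable)
    counts[group][1] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
for group, rate in rates.items():
    print(f"{group}: favorable-output rate = {rate:.2f}")

# Demographic parity gap: a large gap flags the tool for closer review.
gap = max(rates.values()) - min(rates.values())
print(f"parity gap = {gap:.2f}")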

Research data that are considered personally identifiable information (PII) should not be entered into a query in a generative AI system unless the system meets privacy and data protection standards for research. Any research data entered into a generative AI tool will be integrated into that tool’s system.

What patent and copyright considerations arise when using a generative AI tool?

Referring to numerous statutes applicable to patents, as well as case law and regulations, the United States Patent and Trademark Office (USPTO) has determined that only natural persons can be named as inventors, which precludes generative AI from being named as an inventor. The sole or significant use of generative AI to produce or contribute to an invention could therefore preclude patent protection, as inventorship requires a material intellectual contribution from a human inventor.

The US Copyright Office (USCO) has published a statement in the Federal Register regarding generative AI. “Based on the Office’s understanding of generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material.” It goes on to state that each case depends on the extent of creative control the human exercised over the work, including its traditional elements of authorship.

Is there a reliable generative AI detection tool?

There is currently no reliable generative AI detection tool. Some publishers may use generative AI detection tools, as well as other AI-based tools that detect image similarity, such as Proofig and Imagetwin. We recommend reviewing publisher submission requirements on the use of generative AI and other expectations for submissions.

How can generative AI augment research and scholarly activity?

The following are areas where generative AI may augment research and scholarly activity. All examples come with the caveat that some use cases could lead to copyright infringement, depending on what the AI model was trained on, such as copyrighted text or images. Since the training data for many generative AI models is not fully disclosed, using them could produce results that closely resemble a particular owner’s work, and the user (in addition to the company that built the AI model or tool) may be liable for infringement.