Introduction
This guidance provides specific advice on the use of GenAI tools for postgraduate researchers, supervisors and examiners, and should be read in conjunction with the general Ulster guidance on the appropriate use of GenAI tools accessible at University advice about using AI. Guidance rather than a Policy allows the University to retain an open, agile, and proactive educational stance on Generative AI that remains:
- responsive to fast-moving changes in the field and
- ready to re-evaluate the approach in relation to specific contexts, intentions, and outcomes of use.
Broadly, using any GenAI tool in an unethical way, or to generate content without proper attribution of the original source, is considered improper use and will be treated as misconduct (managed via the existing University policy and process). The challenge we all face is identifying what constitutes unethical use of GenAI. This guidance attempts to provide examples of typical Postgraduate Research workflows and how AI tools may be used ethically within these processes.
The role of GenAI tools should be investigated and discussed with supervisors at each stage of the process to identify potential uses and the risks involved, ensuring a transparent and agreed approach that refers to Ulster’s advice and relevant/subject specific ethical guidelines.
This guidance has been developed following a review of other UK institutions’ guidance (largely UCL, Leeds University, Edinburgh University, University of York, University of Westminster, Bristol University, Open University and UKRIO (UK Research Integrity Office)) and will continue to evolve as the area moves forward.
Key Principles
The use of GenAI during a doctoral degree should be appropriate to the field of study, stage of research and task undertaken. Use of GenAI should be jointly agreed between postgraduate researchers and supervisors and should be fully referenced and/or acknowledged.
Given the fast-moving pace of development of GenAI apps, and the fact that many are free of charge, readily available and difficult to regulate, it will become increasingly important for those involved in doctoral degrees (researchers, supervisors, examiners and institutions) to take joint action and responsibility for ensuring the integrity of the doctoral degree as an original, rigorous and significant contribution to knowledge that assumes human authorship in its intent, even when such authorship is aided and enhanced by GenAI technologies.
What is GenAI?
GenAI is an Artificial Intelligence technology that autonomously generates content in response to written prompts; some tools can also respond to visual or audio prompts. The generated content includes text, software code, images, videos and music. GenAI is usually trained using data from webpages, social media conversations, survey data, private datasets and other online content. It generates its outputs by identifying and repeating common patterns (for example, which words typically follow which words, or which pixels in an image should come after which pixels). This is usually done by ingesting large datasets and statistically analysing the distribution of words, pixels or other elements in the data.
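As a rough illustration of this idea of statistical pattern-matching, the toy sketch below (written in Python and not drawn from any particular GenAI product) predicts the next word purely from how often words follow one another in a tiny example text. Real LLMs use large neural networks trained on vast datasets rather than a simple lookup table, but the underlying principle of choosing a statistically likely continuation is the same.

```python
# A deliberately simplified sketch of next-word prediction.
# Real GenAI tools do not work from a lookup table like this,
# but the idea of picking the statistically most common
# continuation is the core of how they generate text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word follows each other word in the training text.
bigram_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the corpus."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))   # 'cat' (follows 'the' most often here)
print(most_likely_next("sat"))   # 'on'
```

The sketch also illustrates why such systems have no understanding of meaning: the output depends entirely on patterns in the training data, not on any comprehension of the subject matter.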
Large Language Models (LLMs), which underpin common GenAI tools such as ChatGPT and Microsoft Copilot, can synthesise huge quantities of content from the internet at pace. This can mislead users into believing in GenAI’s innate intelligence and its capacity to synthesise and understand the meaning of the content it generates. Over-reliance on these tools can also detract from the intellectual practices and skills that researchers need to display throughout the course of their PhD: original thinking, creativity, problem-solving, and higher-order thinking. Moreover, generative AI reproduces bias and stereotyping, hallucinates, and makes mistakes. Because of this, researchers should carefully and thoroughly evaluate outputs generated by artificial intelligence. These matters necessitate a principled and responsible stance on the use of GenAI tools in PhD research, or the University risks diminishing the value of doctoral degrees at Ulster.
It is important to note that appropriate uses of GenAI tools will be different throughout the stages of the research process, and will vary across fields of study.
Use and Misuse of GenAI
The following section attempts to provide guidance on both acceptable use of GenAI and inappropriate use, which may lead to malpractice or misconduct. The key underlying principle is that the researcher, at all times, must be accountable for their research and outputs by maintaining intellectual insight and oversight, originality, and critical reflection, regardless of whether or not GenAI tools have been used to assist in the development of the research.
Remember!
* While GenAI is often accurate, even in complex conversations, it is not reliable and can make simple mistakes.
* GenAI can simulate intelligent conversation, but it doesn't possess true consciousness or human-like thinking.
* Though GenAI responses might seem insightful, it doesn't genuinely understand, comprehend or critically assess topics as humans do. It is just providing the words that would statistically be most likely to appear in a response to the prompt you provided. This means that while many of us will be using GenAI in creative ways in our work, we always need to verify the accuracy of its outputs.
Acceptable v Unacceptable Use of GenAI
Acceptable Use | Unacceptable Use |
---|---|
GenAI can usefully be employed to assist brainstorming, generating ideas, providing an initial outline, generating alt-text for images and figures, or as a ‘study buddy’. While not an exhaustive list, the following are further examples where GenAI might be used to assist development of your research: | The overriding principle in avoiding abuses and malpractice of GenAI tools is to ensure that research is conducted with integrity. Again, while not exhaustive, unacceptable use of GenAI which may lead to misconduct would include: |
* proofreading (proofread your own work first, use the tool as an assistant, and critically review its suggestions to ensure accuracy and alignment with your intended meaning and style) | * generating new text/content |
* to help you prepare for your viva or assessment seminars by generating mock questions | * paraphrasing work from other authors that you want to use as part of your work |
* for data analysis, pattern recognition, or generating insights (caution is needed with data analysis in terms of privacy, copyright and integrity as it requires input of research data and therefore it should be limited to trustworthy software or tools that do not use uploaded data for training purposes or automate critical analysis or deduction) | * Using GenAI to re-write text that you have written yourself (while acceptable use may include refining text and sentence restructure, it would be unacceptable to use tools for substantive changes to your original text, such as condensing or re-writing any of your sentences or sections of work). |
* to support a particular process such as testing and debugging code or translating | * translating work you have written in another language into English, which you then submit as your own writing |
* organising your references | * altering the substance of any ideas and arguments put forward within the work. |
* project planning | |
* to support your literature review process, for example, supporting the development of database search strategies, literature mapping and finding papers relevant to your topic | |
* summarising papers to help you check your understanding (n.b. the summary must not then be re-used in your thesis) | |
* analysing content (again, caution around privacy, copyright and integrity and limit use to trustworthy software or tools that do not use uploaded data for training purposes) | |
* creating artwork or any work contributing to the practice element in a PhD by Practice (images, audio and videos) | |
* organising and summarising your work notes | |
How to Avoid Misuse of GenAI tools
Data integrity: Many GenAI tools store user inputs and interactions as training data to improve performance. Data that should not be entered into such tools includes (but is not limited to):
- data from human subjects, any sensitive data, data accessed through a non-disclosure agreement, copyrighted data or content, lesson slides/handouts, participant names and other identifying information;
- interview transcripts where GenAI is being used to translate and/or analyse the data, as this could lead to misinterpretation or oversimplification of cultural and contextual nuances, resulting in inaccuracies;
- Unless using the institutional Microsoft Copilot account in protected mode, novel or unpublished aspects of the research which should not be in the public domain prior to publication should not be entered into a GenAI tool.
Other risks include:
- Privacy concerns around personal and private data. When using GenAI, personal data may be collected, stored and shared with third parties.
- In some cases, uploading copyrighted material as a prompt to a GenAI tool may constitute a breach of copyright.
- Bias, due to the data GenAI tools are trained on. There may be biases in the datasets used to train GenAI, or users may have introduced their own biases when working with GenAI.
- Oversimplification and misinterpretation of facts.
- Increased risk of plagiarism, as the original sources of content generated by AI are not always known.
How to Declare Use of GenAI tools in your thesis
A key part of maintaining the integrity of your research and thesis is to appropriately acknowledge the use of GenAI, in the same way that you would acknowledge and reference sources used. As the content created in GenAI tools cannot be replicated by another person and cannot be linked to, you must reference the outputs in the same way that you would a personal communication or correspondence.
The library provides more information about citing and referencing AI in the Harvard style and the Library Subject Team can provide more guidance if other referencing styles are used.
- Citing AI Generative Tools in the Ulster Harvard Referencing style for LHS
- Citing AI Generative Tools in the Ulster Harvard Referencing style for CEBE, AHSS and UUBS
As a minimum, you should include the following detail as part of your declaration of use of GenAI:
- Name and version of the GenAI tool used, e.g. ChatGPT-3.5
- URL or source of the GenAI tool
- A brief description of how the tool was used
- Date the content/output was generated
- Confirmation that use has been discussed with the supervisory team
Type | Format | Example |
---|---|---|
In-text Citation (Harvard) | (Corporate Author, Year) | (OpenAI, 2023) |
Reference as per a website: | Author (Year) Title. Source [online]. Available from: website [Accessed date]. | OpenAI (2023) ChatGPT [online]. Available from: https://chat.openai.com/ [Accessed 18 September 2023]. |
Protecting data and content while using AI tools
It is very important to be careful about the information you provide to GenAI tools. Researchers should not upload University intellectual property (your research will have been assigned to the University upon registration, so it counts as University IP) to any GenAI tool other than the University’s Microsoft tools such as Copilot and Teams. Microsoft Copilot is available to all researchers free of charge through the University Microsoft 365 licence. When using Copilot, logging in with your University credentials provides commercial data protection: your chat history is not saved and your data is not used to train AI models, ensuring that University inputs remain secure.
Do not enter any personal or sensitive information about yourself or others into a Generative AI tool. Doing so risks the data being shared online or used to train the AI, and sharing someone else’s information in this way would be a breach of GDPR.
Examples of other GenAI tools
The following provides links to other Large Language Models (LLMs) that you may wish to use. Please note that Ulster University does not endorse any of the tools in these lists. You should not use any of them unless you can ensure that doing so complies fully with all relevant University policies, including data protection and intellectual property, as well as relevant legislation, e.g. GDPR and the Copyright, Designs and Patents Act.
Text AI tools
Examples of text GenAI tools
- ChatGPT from OpenAI
- Claude from Anthropic
- Gemini from Google (formerly Bard)
- Llama2 from Meta
- Pi from Inflection
- HuggingChat from Hugging Face
GenAI tools built on top of GenAI tools
- ChatPDF (summarises and answers questions about submitted PDF documents)
- Elicit (aims to automate parts of researchers’ workflows, identifying relevant papers and summarising key information)
- WebChatGPT (Google Chrome extension that gives ChatGPT Internet access, to enable more accurate and up-to-date conversations)
Multimedia GenAI
- DALL·E (OpenAI’s image tool)
- DreamStudio (Stable Diffusion’s image tool)
- Midjourney (image tool)
- Runway (video tool)
- Boomy (music tool)
- Voicemod (music tool).
Proofreading GenAI Tools
Proofreading tools need to be used with caution, as some can rewrite content. Researchers should ensure that any tools used do not infringe copyright/IP or privacy when uploading data.
- QuillBot: An online AI proofreader that highlights errors and suggests revisions.
- Scribbr: An AI proofreader that fixes grammatical errors, such as sentence fragments, run-on sentences, and subject-verb agreement errors.
- Grammarly: A popular AI tool that offers grammar, spelling, and punctuation checks, as well as writing style analysis.
- Wordvice AI: An AI proofreader for academic and professional writing.
- Acrobat Assistant: An AI-powered tool to help refine and polish PDF content.
Artificial Intelligence tools in transcription and qualitative data analyses
Informed by the School of Psychology's Guidance
Generative Artificial Intelligence (AI) tools for automatic transcription and qualitative data analysis are becoming increasingly accessible. As these tools are openly accessible and generative, their use poses ethical concerns that must be addressed.
Placing recordings of interviews, focus groups or naturalistic interactions into a generative AI tool for transcription or analysis will mean that the data contributes to the tool’s training dataset. These data are therefore no longer stored securely.
At Ulster University we have clear policies on data storage and follow GDPR legislation. We also have data management processes that encourage data sharing that follow FAIR principles. These processes are in place to ensure confidentiality of participants, whilst also encouraging openness and transparency in research processes. Simply uploading non-anonymised recordings into open generative AI tools does not comply with our standards.
Researchers should think carefully about the consent process around data sharing and its implications for analysis and data-sharing processes.
Open generative AI tools should not be used for transcription services. There are some Ulster specific tools, within the Microsoft ecosystem, which are appropriate for transcription in certain circumstances:
- The use of the auto-transcribe service in MS Teams, by pressing “Record and transcribe” at the beginning of a meeting.
- The use of Transcribe in Microsoft Word for the web.
Once recordings have been fully transcribed, they should be deleted.
Open generative AI tools may be used for data analysis on fully anonymised data when participants have consented for their data to be shared and analysed using GenAI tools.
Postgraduate Research Applications
The information provided during the application process should be complete and accurate, and should not contain false or misleading information.
If information is copied from elsewhere or someone else has written the application, this could be considered fraud. This includes the use of GenAI tools such as ChatGPT.
Applications may be rejected if there is a belief that information is false or misleading, including being generated by AI tools.
AI tools can, however, be used in ethical and appropriate ways during the application process, to generate ideas and inspiration or to help with the development of a research proposal or personal statement. The use of any GenAI tools should be clearly acknowledged. Research applicants should be advised to read Ulster’s position on the use of AI tools before applying, particularly the Governance and Ethics section of the website.
Generative AI – A Quick Checklist
The following checklist (adapted from the University of Leeds) may be useful:
- I have spoken to my supervisors about how I would like to use GenAI tools in my research process and this has been documented in the supervision meeting records in PhD Manager.
- I have considered how the GenAI tool will use the data I input (including checking the terms and conditions and privacy policy) and have chosen an appropriate GenAI tool for my task.
- I have understood the risks and limitations of using GenAI, including a recognition of issues of bias, sensitivity, accuracy, appropriate content and ethical issues.
- I have checked and critically reviewed any quotations, citations, or outputs that the GenAI tool has generated.
- I have not submitted any Personally Identifiable Information (PII) to a GenAI tool. (PII refers to data that can be used to identify, locate or contact an individual, either directly or indirectly; examples include names, addresses, email addresses, phone numbers and even IP addresses.)
- I have checked that sharing of content with GenAI tools is consistent with guidelines for the handling of material in any contractual agreements with individual sponsors (if relevant) and accrediting bodies.
- I have considered whether my use of GenAI tools conforms to Ulster University’s ethics review regulations, including, where necessary, engaging with the ethics review processes and receiving a positive response to my ethics review application.
- I have checked that sharing of content with GenAI tools is consistent with my ethics review approval (if relevant).
- I have ensured that no part of the work I am submitting paraphrases GenAI outputs without acknowledgement.
- I have ensured that my research and the work that I submit remains my own work.
- I have appropriately referenced the use of GenAI tools in my work.
- I have saved copies of GenAI outputs and inputs used during the research process and in preparing work for submission (copies may be requested in cases of misconduct).
- I have liaised with my supervisors to agree on any use of GenAI.