Purpose and Scope
This guidance seeks to protect the academic integrity of students, staff, and subject disciplines in the context of generative artificial intelligence use.
It sets out principles for the responsible, ethical, and effective use of generative artificial intelligence (GenAI) tools (e.g. ChatGPT, Copilot, DALL·E, Midjourney) in student coursework.
Information for students
Information about using GenAI tools should be available for students in the programme handbook, module handbook and assessment briefs.
Staff should identify opportunities to discuss GenAI with students at multiple points, from induction onwards, to clarify expectations and what constitutes acceptable use in the subject.
What the advice applies to
This advice applies to all taught modules and forms of assessed or submitted work across the university, including undergraduate, postgraduate, and continuing education programmes.
It aims to clarify expectations, promote academic integrity, and provide guidance for staff, and in turn students, on the responsible, ethical, and effective use of GenAI tools.
Academic integrity and Assessment code of practice
This document should be read in the context of the policies and procedures around Academic Integrity and the Assessment Code of Practice.
This advice can also be used to review current assessment practice and align it with learning outcomes, reduce plagiarism, and improve student success.
Principles
The university acknowledges the widespread use and educational potential of GenAI, but also recognises the risks posed to academic integrity and learning.
The following principles, derived from common sector practice and advice from the QAA and Jisc, can be used to examine GenAI use and foster a shared understanding at school level, recognising the nuances of specific subjects.
GenAI literacy is developed by critically engaging with and reflecting on the benefits and risks of GenAI tools.
Such practice equips educators with the knowledge to understand where AI fits into their learning and teaching, the skills to integrate it effectively into practice, and the confidence to guide students in its responsible, ethical, and effective use.
Staff and students must maintain academic integrity, submit work that reflects their own labour, reasoning/critical thinking, creativity and learning, and comply with the Assessment Code of Practice.
Discussions about academic integrity, to enhance student understanding and internalisation of their accountability, are recommended from induction onwards.
GenAI use must support—not replace—the development of core academic, disciplinary, and professional skills. Academic rigour underpins the integrity of higher education, demanding depth, evidence, and critical engagement.
Educators must safeguard rigorous standards—designing assessments that challenge understanding, promote originality, and uphold scholarly values to enable students to develop the intellectual discipline essential for meaningful, credible academic and professional contributions.
Critical thinking is foundational to university education, enabling students to question, analyse, and create knowledge.
As digital tools evolve, protecting and nurturing this skill is vital. Educators must foster environments that value inquiry, resist passive consumption, and empower learners to think independently, ethically, and rigorously.
Sustainable GenAI use in education requires thoughtful integration that balances innovation with equity, ethics, and environmental impact.
Educators must develop practices that minimise bias, reduce digital waste, and promote inclusive access.
By embedding sustainability into AI adoption, we ensure long-term benefits for learners, institutions, and the broader educational ecosystem.
Creativity remains essential in higher education, enabling learners to imagine, innovate, and express original thought.
As GenAI tools proliferate, educators must protect and nurture human creativity—designing learning that values curiosity, experimentation, and divergent thinking—ensuring students remain active creators, not passive users, in an increasingly automated academic landscape.
Acceptable Use Framework
All use of GenAI, whatever form it takes, should be acknowledged in coursework; this requirement is highlighted in the student declaration that accompanies all submitted work.
Staff Responsibilities
To avoid students making unsupported decisions about how they use GenAI in each subject, staff should provide clear information about GenAI use in coursework and exams in the Programme and Module Handbooks, in student-facing assessment briefs, and in class discussions.
Be specific about how disclosure should happen and what it looks like beyond the standard student declaration, e.g. an appended description of GenAI use, with or without a critically reflective commentary (depending on academic level and module learning outcomes).
Design assessments that foreground process, use of information, reflection, and critical reasoning that goes beyond reproduction of information where possible.
Encourage students to demonstrate authorship, personal insight, critical analysis and originality.
Incorporate activities that explore and critique GenAI use, e.g. design thinking processes lend themselves well to interacting with GenAI tools. These activities offer opportunities to reflect upon and discuss responsible and effective GenAI use.
Review current assessment strategies and student assessment workload at the programme level to identify and manage risks.
Identify and address potential misuse through appropriate channels.
Student Responsibilities
- Understand and adhere to GenAI rules within each module.
- Ask questions and/or ask for help
- Disclose/cite any GenAI use that influenced submitted work using guidance provided by staff, the Library, the module handbook and the assessment brief
- Use GenAI tools to help learning ethically and with academic honesty
- Be prepared to explain or defend submitted work orally if requested
The framework
Use of GenAI in the context of coursework can be viewed through the lens of a sliding scale from fully acceptable to unacceptable.
Staff and students should familiarise themselves with the categories below, informed by and adapted from Perkins et al. (2024).
The examples provided for the categories are not exhaustive, and individual subjects may want to add specific instances.
Fully Acceptable
Low-Risk Use
GenAI is used sparingly for basic tasks:
- Writing assistance (e.g. grammar checks, paraphrasing suggestions, streamlining word counts)
- Study support (e.g. summarising notes, generating practice questions)
- Brainstorming ideas or outlining structure
This kind of use is characterised by a high level of originality and independent thought.
Ideas and approaches are entirely the student's own, with minimal external influence.
Permitted with disclosure.
Acceptable with Conditions
Moderate-Risk Use
GenAI is used to:
- Assist with structuring content, generating code skeletons, or visual assets
- Create examples or drafts that are heavily edited by the student
- Simulate feedback or peer dialogue for self-improvement
This kind of use is characterised by the student's original ideas that are enhanced by GenAI, but the core concepts and arguments remain their own.
GenAI contributes to refining and expanding the student's approach, but the primary direction and innovation come from the student.
This approach must be made explicit by the module leader in the module handbook and student-facing assessment briefs, supported by opportunities to explain and discuss.
Detailed disclosure in coursework submissions is required.
Questionable Use
High-Risk Use
GenAI:
- Heavily influences final content with limited student editing
- Is used to answer assessment questions without demonstrating understanding
- Generates fabricated citations or references that are not properly checked or cited
This kind of use is characterised by the student's input still being present, but AI-generated content plays a significant role in shaping the final output.
The approach may be a blend of the student's ideas and AI-generated enhancements, with a noticeable influence from AI tools.
Likely to breach assessment rules; staff should explain what this looks like in their assessments.
Students unclear about their approach should discuss with their lecturer before use.
Unacceptable Use
Academic Misconduct
GenAI is used to:
- Produce entire assignments, essays, or code with no meaningful student input
- Generate fake data, citations, or reflection
- Circumvent exam or controlled conditions
This kind of use is characterised by an apparent lack of the student's original thought, with AI-generated content dominating the work.
The approach is largely driven by AI, with limited innovation or unique perspective from the student.
Prohibited. Subject to academic misconduct procedures.
Support and Development
Staff and students can access support through:
- Liaison with the Learning Enhancement Directorate
- Library guidance on referencing AI tools
- Guidance on Ulster University’s GenAI webpage
Review and Monitoring
This document will be reviewed annually in line with developments in AI capabilities, academic integrity guidance, and student learning needs.
Resources
At course level, module coordinators should consider the extent to which AI is incorporated into their own teaching and assessment strategies. An example of guidance that can be shared with students (e.g. in course handbooks or via BBL) is outlined below:
Generative AI tools may be used to support your learning and the preparation of coursework. The tools can help you to learn about new concepts and can help you develop digital skills in the process. Some examples of how they can be used are as follows:
- Planning the structure of written work.
- Developing creative ideas and inspiration.
- Answering questions about web-based material.
- Helping to improve writing skills.
- Asking for an explanation of a topic.
However, while these tools can generate content that appears reasonable, they should not be relied upon to be wholly accurate, and you should be aware of their limitations. Some of the current limitations of Large Language Model (LLM) AI tools include:
- The tools do not understand the meaning of the words they produce.
- The tools will often generate arguments that are factually wrong.
- The tools will often generate false references and quotations.
- Content generated is not checked for accuracy.
- The tools can distort the truth and emphasise the strength of an opposing argument.
- The tools may struggle to maintain contextual understanding over extended conversations; however, there are current developments in this area.
- The tools may struggle to generate responses based on visual and auditory input.
- Generated content can include harmful bias and reinforce stereotypes. These biases can be reinforced through further human interaction with the model.
- The tools rely heavily on data access to generate responses. This has led to concerns about data privacy.
- The models are trained on data sets from a Western, English-speaking perspective, again reinforcing particular perspectives.
Developing skills to prompt AI tools is likely to be a useful digital skill, but users should also understand the limitations and remain open, curious, and critical when making judgements about the accuracy of the content generated.
Importantly, unless you are specifically asked to do so, you should avoid using GenAI to create content for assessed coursework and research. The use of AI tools must not substitute your critical thinking, problem-solving skills and thought processes. Your work must be original and reflect your own informed perspective and understanding.
When using GenAI technology, you are expected to exercise responsible and ethical practices. This includes:
- checking whether the use of GenAI is permitted for a given piece of assessment
- understanding the limitations and potential biases of AI algorithms, and limiting their use accordingly
- following cybersecurity principles when using AI tools, and never inputting:
  - personal information
  - sensitive or confidential data
  - copyright-protected information
- critically evaluating the outputs of AI-generated content
- maintaining academic integrity by appropriately citing and acknowledging all sources
- keeping records of draft work and notes
Acknowledging the use of Generative AI
Where generative artificial intelligence (AI) tools have been used for an assessment, they must be acknowledged appropriately to ensure that any output is not misconstrued as the student’s own work. Before beginning any piece of assessed work, students should check that the use of AI tools is authorised, as this practice may differ across modules and courses of study.
Use the links below to find out more information about citing and referencing AI in the Harvard style for your faculty.
- Citing AI Generative Tools in the Ulster Harvard Referencing style for LHS
- Citing AI Generative Tools in the Ulster Harvard Referencing style for CEBE, AHSS and UUBS
If using a different referencing style to Harvard, please contact your Library Subject Team.
The wider HE discourse has naturally centred on assessment and concern about academic integrity. Ulster has a long history of active learning pedagogies combined with authentic assessment design, and the current AI in assessment discussions can help us to refocus on assessment design that measures active learning, critical thinking, problem-solving, and reasoning skills rather than written assignments measuring declarative knowledge. Personalised, reflective accounts, developed iteratively as understanding develops, are also valuable approaches, and some subject disciplines have been using video and oral presentations to measure understanding and create a more personalised approach to assessment. These diverse approaches to assessment are identified as good practice across the sector, being more inclusive while reducing the risk of plagiarism.
The QAA has recently published guidance on how to approach the assessment of students in a world where students have access to Generative Artificial Intelligence (AI) tools.
There are, however, many situations where alternative assessment or assessment redesign may not be practical, or where changes may take time, and many colleagues have become curious about AI detection software. While there are tools that claim to detect AI, they demonstrate varying levels of reliability. Jisc and the QAA have provided helpful information on these detection tools:
Jisc notes: “AI detectors cannot prove conclusively that text was written by AI.” Michael Webb (17 March 2023), AI writing detectors – concepts and considerations.
Before experimenting with any generative AI tool, you should give some consideration to privacy. We do not know what data is being collected, by whom, or how it is applied when we use these tools. For this reason, you should not share personal or sensitive data; for instance, it would not be appropriate to ask an AI tool to perform analysis on a dataset containing student data. Currently, ChatGPT (the latest version is GPT-4) can be tested free online, but be careful: there are also paid-for subscriptions.
You might start experimenting with the tool by asking a question such as:
- What are the ethical considerations of using Generative AI?
- Explain AI bias in a way that a child can understand
You can get more specific results by being more specific with your prompts.
- Tell me how [add query] works in 50 words.
- Behave as a higher education lecturer. [Add query]
- Write a four-paragraph summary about [add query]
- My excel spreadsheet has two columns, A & B, how can I find results that are in both columns?
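As an illustration of what the spreadsheet prompt above is asking for, the same "values in both columns" task can be sketched in Python. The column values here are invented purely for the example:

```python
# Find values that appear in both of two spreadsheet-style columns.
# The sample data below is invented purely for illustration.
column_a = ["apple", "banana", "cherry", "date"]
column_b = ["banana", "date", "fig"]

# A set intersection gives the values common to both columns -
# the same result Excel users typically build with MATCH or COUNTIF.
common = sorted(set(column_a) & set(column_b))
print(common)  # ['banana', 'date']
```

Asking a GenAI tool the original prompt should yield an Excel formula; comparing its answer against a simple worked example like this is one way to check the tool's output critically.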
You may also wish to test some subject-specific prompts, such as:
- Build an HTML website homepage with three columns and a hero image
- Can you explain Standard Deviation using an Excel example?
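For the standard deviation prompt above, a worked example may help when judging whether a tool's explanation is accurate. The sketch below shows, in Python, the calculation that Excel's STDEV.S function performs (sample standard deviation); the data values are invented for illustration:

```python
import math

# Sample data, invented for illustration (e.g. five exam marks).
marks = [62, 58, 71, 65, 59]

# Sample standard deviation, as computed by Excel's STDEV.S:
# 1. find the mean,
mean = sum(marks) / len(marks)
# 2. sum the squared deviations from the mean,
squared_devs = sum((x - mean) ** 2 for x in marks)
# 3. divide by n - 1 and take the square root.
sample_sd = math.sqrt(squared_devs / (len(marks) - 1))

print(round(sample_sd, 2))  # 5.24
```

Note the n - 1 divisor: Excel's STDEVP/STDEV.P functions divide by n instead, a distinction a GenAI explanation may gloss over.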
Further guidance
Using AI to implement effective teaching strategies in classrooms (Mollick E & Mollick L, 2023)
Jisc National Centre for AI
The QAA advises: “Be cautious in your use of tools that claim to detect text generated by AI and advise staff of the institutional position. The output from these tools is unverified and there is evidence that some text generated by AI evades detection. In addition, students may not have given permission to upload their work to these tools or agreed how their data will be stored.”
QAA (31/1/2023), The rise of artificial intelligence software and potential risks for academic integrity: briefing paper for higher education providers
OpenAI, as of 24th July 2023, have disabled their own detection service following concerns about accuracy.
A note on the Turnitin AI detection service
Turnitin provides an AI writing detection service which can be integrated within normal grading workflows. Instructors are presented with a prediction of the likelihood of a piece of work having been generated by AI tools, such as ChatGPT, when they are grading a paper. However, the AI working group at Ulster had many concerns from ethical, accuracy, and privacy perspectives and made the decision to disable this tool. This was very much in line with the UK higher education sector. Turnitin can provide no evidence as to how the AI score was generated, making any academic integrity judgement difficult.
Authentic Assessments for the AI Era
The webinar below focuses on how education providers and instructors are reconsidering effective and authentic feedback and assessment strategies as AI tools and resources become readily available to students around the world.
TED Talk: How AI could save (not destroy) Education
Sal Khan, the founder and CEO of Khan Academy, believes AI can spark the "greatest positive transformation education has ever seen."
Further Information on AI in Assessment
- QAA's 'Reconsidering assessment for the ChatGPT era'
- Advance HE: Authentic assessment in the era of AI
- Rethinking Assessment in the Age of AI
- Perspectives on redesigning assessment practices in an AI world
- Using AI tools in assessment
The AI working group is establishing Faculty/School-based subgroups. The aim of these groups is to explore the use of AI tools within discipline-specific contexts and to reveal examples of good practice, plus associated challenges. Outputs from these groups will be shared on this site in due course.