The University is developing new guidance to support staff in integrating AI ethically, transparently and confidently into their teaching, learning and assessment practice. This guidance aims to help staff make the most of AI’s potential to enhance learning and academic practice, while upholding academic integrity, rigour and authenticity. It encourages staff to view AI as a valuable tool for innovation and efficiency, with a continued focus on credible evidence of student authorship, achievement and meaningful engagement with learning outcomes.

The full guidance document will be published here as soon as it is available.

Further information is available in the summary below.


Summary

The University encourages staff to educate themselves about AI and use it to enhance academic practice, focusing on authenticity, authorship and alignment with learning outcomes, rather than 'policing' AI use. The preferred summative assessment model is a hybrid of synchronous and asynchronous elements to ensure credibility and verification of student work.

Track A (Hybrid), combining take-home (asynchronous) assessments and real-time elements, is the preferred approach for modules of 30 credits and above, while asynchronous-only assessments will be limited to modules of up to 15 credits.

Students must acknowledge AI use in their submissions with a short form outlining whether (and, if so, how) AI supported their work. This short form must be included in all summative assessment submissions. The absence of, or inconsistency in, acknowledgements may lead to academic misconduct processes.

AI tools are not currently permitted for uploading student work or for marking and feedback, due to data protection and integrity concerns. Staff may use AI for personal productivity but must not use identifiable student data. AI detection tools are not currently allowed.

Assessments should reflect real-world skills and balance synchronous and asynchronous components for reasons of credibility and authorship, and to promote authenticity, creativity, rigour and fairness. Over-assessment and workload implications should be considered in assessment design. Co-creation with students is encouraged wherever possible.

From September 2025, AI acknowledgements (the short form described above) are to be incorporated in asynchronous summative submissions, with progressive phasing out of asynchronous-only assessments, normalising Track A (Hybrid) as the standard and default assessment model in modules of 30 credits and above.

The University offers training, workshops and resources via CADI to help staff engage with AI in education, including creative uses, assessment redesign, ethics and integrity. Staff unfamiliar with AI tools and their uses are expected to undertake relevant training.

Visit Leveraging AI in Learning and Teaching for additional guidance.

Learn more about Current Position on Exams.