Can Copilot be your Wingman for Marking and Feedback?
Strand 2
Time: 2:35pm to 3:05pm
Theme: Assessment
Location: Richmond LT2
Presenters: Sally Cheng, Martin Hoskins, Gavin Knight, Claire Perry and Mary Watkins
Abstract:
Using Microsoft Copilot Studio, we created an AI agent capable of marking and generating feedback on summative written assessments. The agent was designed using only the assessment brief and marking criteria, allowing us to evaluate how effectively it could interpret expectations and apply academic judgement. After uploading a sample of student submissions that had already been marked by module leaders, we were able to compare agent-generated outputs with human marking. We focused on the quality of the agent’s feedback and its ability to align its grading with the criteria. The agent was not used to generate marking or feedback that was shared with students. Students were given the option to exclude their submissions from the pilot, so the exercise was a controlled evaluation of capability rather than a live teaching intervention. Our presentation reflects on our contribution to the Jisc AI in assessment pilot, highlighting both the strengths and limitations of this approach. We consider where such tools may enhance academic practice and where disciplinary expertise remains critical. Ultimately, we ask whether Copilot is best understood as embedded within the task itself or as operating alongside it, less a co-pilot and more a wingman.