This is one of the most important components of the lifecycle: it is where the real student learning takes place, yet it is one of the areas least well supported by existing commercial systems. Strategies need to be in place to ensure students read and engage with their feedback, rather than just their marks. One of the big problems, however, is that feedback on individual assignments is stored at module level, making it difficult for students and tutors alike to gain the kind of overview needed to support longitudinal development. This is a particular problem for personal tutors, who need to understand how students are performing across a range of units but may not teach on any of those units and therefore have no access to the marks or feedback.
‘Once feedback is given, it’s locked away in fragments in its respective Moodle and Turnitin boxes and beyond the purview of the Personal Tuition system we run here.’
‘We have developed processes to make sure our feedback is recorded in line with our records retention policies and is stored locally, backed up and away from servers we don’t control. However, we are interested in exploring ways feedback can be recorded and accessed in one place for students, allowing for reflection and engagement with feedback across modules.’
‘The ability to separate feedback from the release of marks/grades is also not built into current systems. What would be appropriate is for the release of feedback to prompt the student to write a short reflection, on receipt of which the mark is released. The reflection is read not by an academic but by support staff, and the information is entered into a separate repository for QA monitoring purposes.’
A post on the Jisc EMA blog about ‘Using technology to close the feedback loop’ considers this topic and looks at some good examples including:
– The University of Westminster Making Assessment Count (MAC) project, which aimed to transform the student experience of assessment by engaging students in reflection on feedback for learning and development. The outcomes are summarised in the project report and a MAC toolkit is available to other institutions.
– The University of Dundee interACT project, which placed great emphasis on creating the conditions for dialogue around feedback.
– The Sheffield Hallam University ‘Technology, Feedback, Action!’ project.
There are many other examples of good practice, although a commonly heard issue is that the best examples have often been developed with fairly small numbers of students and can prove difficult to scale up to much larger cohorts. An issue with the Westminster project was the stand-alone nature of the e-Reflect tool used to support the MAC approach; a subsequent project was undertaken to make e-Reflect LTI compliant.
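‘LTI compliant’ here refers to the IMS Learning Tools Interoperability standard, which allows a tool such as e-Reflect to be launched from within a VLE with the user’s identity and course context passed across securely, rather than running stand-alone. As a rough illustration only (this is not the e-Reflect implementation, and the URL and parameter values are hypothetical), an LTI 1.1 launch is an HTTP POST whose parameters are signed with OAuth 1.0a HMAC-SHA1, so the tool can verify the request came from a trusted VLE:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def sign_lti_launch(url, params, consumer_secret, http_method="POST"):
    """Compute the OAuth 1.0a HMAC-SHA1 signature for an LTI 1.1 launch.

    `params` should contain the launch fields (lti_message_type, user_id, ...)
    and the oauth_* fields except oauth_signature itself.
    """
    # Percent-encode keys and values, sort them, and join into the
    # normalised parameter string required by OAuth 1.0a.
    encoded = sorted(
        (quote(k, safe=""), quote(str(v), safe="")) for k, v in params.items()
    )
    param_string = "&".join(f"{k}={v}" for k, v in encoded)

    # Signature base string: METHOD & encoded-URL & encoded-parameters.
    base_string = "&".join(
        [http_method.upper(), quote(url, safe=""), quote(param_string, safe="")]
    )

    # Signing key is the consumer secret followed by "&" (LTI launches
    # have no token secret). HMAC-SHA1 digest is base64-encoded.
    key = quote(consumer_secret, safe="") + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The VLE sends this signature as the `oauth_signature` parameter; the tool recomputes it with the shared secret and rejects the launch if the values differ. This shared-secret handshake is what lets feedback tools sit inside the VLE rather than as disconnected systems.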
‘The biggest pain point is getting an assessment system that works and is scalable. Small products work fine with small groups but not scalable, enterprise products that are scalable encounter problems with design and usage.’
Learning and assessment analytics have an important role to play in this part of the process. Learning analytics is itself a relatively new field, and assessment analytics is currently an underdeveloped part of it. There is some interesting work being undertaken by Manchester Metropolitan University and the University of Huddersfield [the report will link to case studies].
MMU guidance on Reflecting.