Towards a generic assessment and feedback lifecycle model

The assessment and feedback lifecycle shown below was developed by Manchester Metropolitan University and has already been adopted or adapted by a range of other HEIs. At our EMA Think Tank on 14th May we discussed the model with a range of HE providers, and there was general support for the idea that a generic model could be used by the sector in various ways, such as:

• a means of helping individual stakeholders take a holistic view of assessment and feedback activities
• a prompt to support academic decision making during curriculum development
• a starting point for process review and improvement
• a starting point for a technology roadmap
• a means of clarifying requirements to system suppliers

Such a model needs to recognise that there is no ‘one-size-fits-all’ approach (often not even within a single institution); rather, it is a framework to stimulate discussion. There is potential to turn such an approach into a toolkit similar to that produced by the University of Ulster Viewpoints project, which has been very well received in a large number of institutions.

We have since discussed the model with FE providers as well, and found no reason why the top-level model should not work just as well for that sector.

We would now like to open up the discussion to a broader range of providers. We invite you to think about how the lifecycle relates to your own context and add any comments (please note you need to click on this post and open it to see the comments box).

Prompts for general consideration around each stage of the lifecycle

• Who are the key stakeholders?
• What are the information needs: who, what, when, why?
• What processes are still typically manual/paper-based/offline? Which of these activities require academic judgement, and which are administrative?
• What processes are already supported by technology? Which of these activities require academic judgement, and which are administrative?
• How do the processes vary with different types of assignment?
• What are the existing interfaces between the technologies?
• Where is there potential to make more/better use of EMA technologies?
• What are the key problem areas?
• What examples of good practice do we already know about?

(Thanks are also due to the University of Portsmouth for helping us refine our thinking on this topic.)


The University of Manchester, Faculty of Humanities, has created a similar lifecycle split into academic and administrative tasks – this is discussed in the comment by Anna below. We can’t add her diagram to a comment, so here it is.

[Diagram: Manchester academic/administrative split]

8 thoughts on “Towards a generic assessment and feedback lifecycle model”

  1. Gill Ferrell

    In discussions at our Think Tank it was suggested that ‘Supporting’ isn’t a stage in the lifecycle but rather something that sits around the whole lifecycle – supporting both staff and learners at all stages. I think I’m in agreement with this.

  2. Gill Ferrell

    We have had some further clarification from MMU about the ‘supporting’ element of the lifecycle – in their view this is about supporting the student in the period between “setting” and “submitting” – so basically how the academic processes support assessment. It was not the intention that this element should cover overall support for EMA so moving “Supporting” from where it is in this model would skew the intended purpose.

  3. Neil Gordon

It seems an appropriate model for linear assessment – though perhaps consider a sub-cycle from 5 back to 3 and/or 4 to allow for multiple submissions. This is particularly the case for computer-based marking.

    1. Rachel Forsyth

      Neil’s point is a good one if it is the same assignment being resubmitted. The original idea for the model was our attempt to map what happens to/with a single assignment task and to make sure that our systems (begin to) join up these different aspects.

      The ‘specification’ part of the cycle, in our institution, is likely to remain the same for several years, unless the ‘reflection’ part indicates that it needs to be changed, in which case a formal modification needs to be made. Not sure if that counts as missing out a step as you go back round the cycle, or having a sub-cycle between stage 2 and stage 8. I am sure there will be other situations where there are mini-loops in part of the cycle (eg when second marking is used).

  4. Gill Ferrell

Good point, Neil – allowing for this iterative loop was the key idea in inserting this stage.
We have also had some further input from Rod Cullen at MMU, who explained how it is used in practice there: support activities might include things like assignment tutorials, submission of drafts and related provision of feedback. MMU has also used the supporting phase on some courses to provide regular formative MCQ quizzes linked to tutorials for feedback purposes.
Such activities need to be built into the overall assessment strategy design (particularly in the context of formative assessment) but also require consideration in relation to the technical aspects of EMA. As an example, Turnitin can be set up to allow draft submission and feedback on work in progress but, on trying to use this facility, MMU found that it led to problems identifying when a student had actually made a final submission. In this case a workaround was possible: setting up separate submission boxes for formative (draft) and summative submissions for a specific assignment.

  5. Anna Verges

I hope I am not misunderstanding the model … but perhaps between 4 and 5 we need an administrative step like ‘submissions administration’ to cover tasks such as identification of non-submitters and administration of extensions?

Related to this same point but more broadly: given that the Assessment Lifecycle fundamentally concerns three stakeholder groups (students, academic staff and administrators), we drew a process with an inbuilt distinction between tasks performed by academic staff and tasks performed by administrative staff. That is, it looks like two columns with a timeline dividing them: on one side are the administrative tasks, e.g. identification of non-submitters, and on the other the academic tasks.

I wonder whether this addition of academic/admin roles adds too much detail to the model, but I think it is useful in visualising that there are both educational and administrative aspects to the assessment process, and that technology needs to support both equally well. On the other hand, what I like about the MMU model, which ours has not achieved, is that it presents the process as circular rather than linear.

  6. Gill Ferrell

    Unfortunately we don’t seem to be able to add files to comments so Anna’s diagram has been added to the bottom of the original blog post.

  7. Pingback: TRansforming Assessment + Feedback For Institutional Change (TRAFFIC) at MMU | Electronic Management of Assessment
