This is probably the most problematic component of the life-cycle, because the variety of pedagogic practice here means the fit between institutional processes and the functionality of commercially available systems is at its weakest. We heard a very clear message from the sector that existing systems do not adequately meet institutional requirements in these areas. A basic issue is that marks and feedback are different things and need to be handled differently, yet technology platforms tend to conflate the two.
Models of marking
Systems too often seem to be predicated on the assumption that 1 student = 1 assignment = 1 mark. This model is usually adequate for formative assessment but does not meet UK requirements for summative assessment processes. Systems would ideally offer a range of workflows based on different roles, e.g. first marker, second marker, moderator and external examiner. There is a discussion of models of marking on the Jisc EMA blog.
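The requirement described above can be sketched as a simple data model in which one submission carries several independent marking records, one per role, so that a second marker or external examiner never overwrites an earlier marker's comments. This is an illustrative sketch only: the class and field names are our own invention, not those of any particular system.

```python
from dataclasses import dataclass, field

# Illustrative sketch: one submission holds many marking records,
# one per role, rather than a single overwritable mark.

@dataclass
class MarkRecord:
    role: str          # e.g. "first marker", "second marker", "moderator", "external examiner"
    marker: str
    mark: int
    comments: list[str] = field(default_factory=list)

@dataclass
class Submission:
    student_id: str    # anonymised identifier, not a name
    assignment: str
    marks: list[MarkRecord] = field(default_factory=list)

    def add_mark(self, record: MarkRecord) -> None:
        # Append rather than replace: each role's record is kept separately.
        self.marks.append(record)

    def marks_for_role(self, role: str) -> list[MarkRecord]:
        # A moderator or external examiner can retrieve each marker's
        # comments without any risk of one set obscuring another.
        return [m for m in self.marks if m.role == role]

# Two markers record marks on the same submission without conflict.
s = Submission(student_id="A1234", assignment="Essay 1")
s.add_mark(MarkRecord(role="first marker", marker="Marker 1", mark=62,
                      comments=["Well structured argument"]))
s.add_mark(MarkRecord(role="second marker", marker="Marker 2", mark=58,
                      comments=["Evidence thin in section 2"]))
print(len(s.marks))  # 2
```

A workflow engine built on such a model could then enforce sequencing (e.g. hiding the first marker's records until the second marker submits, to support 'blind' second marking) rather than relying on markers to identify their own comments.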
‘We have local practices that vary but work to a common aim and meet our regulations. … When working with schools and programmes most of these variations can be met using the tools we have at hand but require workarounds that take time and act as a barrier to staff adopting elements of EMA.’
At present there are considerable risks (all too often realised in practice) of second markers and external examiners overwriting or deleting comments made by an earlier marker. There are also difficulties in recording decisions taken during the moderation process (more on this in the section on recording grades below). Some of these workflow issues appear to be handled better within existing e-portfolio systems, and there is a need to look at which aspects of that functionality the suppliers of assessment management systems could adapt and apply.
To use the example of Turnitin/GradeMark: it is possible for two people to mark the same assignment (assuming the model is open, sequential or parallel marking), but the system does not distinguish between the two sets of comments and marks, so the onus is generally on the second marker to identify their comments. The earlier comments are of course visible to the second marker, so ‘blind’ second marking is not supported (workarounds such as duplicate submissions or marking sheets stored externally are needed in this model). The situation is further complicated where a group of markers takes on first and second marker roles and divides a cohort between them.
‘The main difficultly is in matching existing assessment processes and University policies with what is possible within the software. This is especially true of the mandatory use of anonymity which creates multiple difficulties in administration. Finding workarounds for moderation and external staff again creates manual work that takes away from the benefits.’
‘Neither Turnitin or our internal system have a means to record online the marks of two markers – nor is there an opportunity for a moderator to record any decisions.’
‘The biggest issues here are around how moderation or second marking happens and how the marks and changes are recorded: our workarounds range from simple things using different colours to more complex solutions using hidden fields in the gradecentre or different hidden columns. If systems like Turnitin allowed different marking layers for second markers this would be a great help.’
Anonymous marking was the subject of much discussion during the research for this report and it is clear that a requirement for anonymity poses various difficulties in relation to the main commercial systems that support EMA, for example:
– being easily able to identify which students have not submitted where there is full anonymity
– students being required to use an ID yet still writing their names on papers
– identifying students with special needs or mitigating circumstances
– anonymity potentially being lost once data is returned to the VLE
– marking and moderation that needs to take place after the return of feedback to students (when anonymity has to be disabled in many systems).
This is fundamentally a pedagogic issue with both technical and process implications. In response to our online questions, almost a quarter of respondents (23%) said this area was ‘very problematic’, whilst a similar proportion (24%) said it was ‘not a problem’. The reason it is a huge issue for some institutions and not for others comes down to pedagogic practice and hence policy. Some institutions (often in response to student demand) have very strict requirements for anonymous marking to ensure fairness, whilst others (again generally citing student pressure) believe anonymity has no place in the type of learning and teaching they deliver. There is more discussion of anonymity and educational principles on the EMA blog.
‘Anonymity is another contentious area as staff have mixed views on it though it is institutional policy.’
The QAA (2012) notes that the nature of assessment in many disciplines (e.g. performing arts) makes anonymity impractical and also states: ‘In particular there is a tension between the perceived benefits of anonymity and its negative impact on the giving of personalised feedback. Evidence suggests that feedback is more likely to be heeded by the student where the feedback is tailored to the individual student based on the marker’s knowledge of that student’s progress.’
Even within institutions that require anonymity, however, the distinction is not clear cut: there are various perspectives on what constitutes anonymity and on the point, if any, at which anonymity is lifted so that markers can associate work with individual students.
‘Returning marks and feedback to students necessitates lifting anonymity – this is a problem because it forces staff to choose between giving timely feedback and preserving anonymity.’
In EMA terms, anonymity is handled in various ways, most of which seem to be problematic. Students can be required to input an ID but this does not stop them including their names on submissions. Administrators are sometimes used as the ‘glue’ so that they can match up names and numbers. In some cases anonymity is possible in assessment management systems but lost once data is returned to the VLE and there are often particular workarounds (such as cover sheets) needed to ensure that special needs and mitigating circumstances are taken into consideration where anonymity is a requirement.
To take the example of the Turnitin system: it maintains a simplistic form of anonymity up to the point, known as the ‘Post Date’, when feedback is released to students, at which point the student name is appended to the submission file name. This can cause difficulties in managing extensions to the agreed submission date, as feedback is released to all students at the same time unless separate submission processes are created for these students. It also means that any form of anonymous second marking, moderation or external examining that takes place after the Post Date requires an intermediary who has to download and re-anonymise the work and then pass it on. Additionally, anonymity can be switched off at any point prior to the Post Date by any academic teaching that group of students and cannot subsequently be re-enabled.
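The reported behaviour can be illustrated with a small state sketch. This is a hypothetical model of the behaviour as described to us, not Turnitin’s actual implementation, and all names in it are our own.

```python
from datetime import date

# Illustrative state model of the reported anonymity behaviour:
# submissions stay anonymous until the post date, at which point
# the student name is exposed and anonymity cannot be restored.

class AnonymisedSubmission:
    def __init__(self, student_name: str, student_id: str, post_date: date):
        self.student_name = student_name
        self.student_id = student_id
        self.post_date = post_date
        self.anonymity_disabled = False  # staff can switch anonymity off early

    def display_name(self, today: date) -> str:
        # Before the post date, and unless anonymity has been switched off,
        # markers see only the anonymised identifier.
        if today < self.post_date and not self.anonymity_disabled:
            return f"Anon-{self.student_id}"
        # Once the post date passes, the name is attached permanently.
        return self.student_name

    def disable_anonymity(self) -> None:
        # One-way transition: once disabled it cannot be re-enabled.
        self.anonymity_disabled = True

sub = AnonymisedSubmission("Jo Bloggs", "A1234", post_date=date(2025, 6, 1))
print(sub.display_name(date(2025, 5, 1)))  # Anon-A1234
print(sub.display_name(date(2025, 6, 2)))  # Jo Bloggs
```

The one-way `disable_anonymity` transition is the crux of the difficulty described above: any post-Post-Date moderation or external examining has left the anonymous state, forcing the manual download-and-re-anonymise workaround.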
‘The departments who have moved over entirely to EMA are experiencing problems with managing internal moderation and feedback to students whilst maintaining student anonymity (which is required by our regulations). In order to maintain anonymity, they are having to go through complicated workarounds.’
It should also be noted that, even with these workarounds, EMA really only serves as a means of helping to avoid unconscious bias in marking. Given that settings in VLE systems are trust based, deliberate malpractice (however unlikely) is usually technically possible, although system logs would provide evidence in the case of an investigation. However, as one participant in the research noted:
‘Such opportunities for malpractice abound in many areas of academia as a consequence of the high degree of workplace autonomy academia requires. To design an entire e-assessment approach round an assumption of deliberate malpractice on the part of academic markers would be extreme and bring many and negative side effects.’
Individual marking practice
Once we reach the topic of the individual marking practices of academics, the issues are equally complicated but also deeply personal, relating as they do to an individual’s established practice and preferences. There are some general issues around the ability of systems to deal with mathematical, scientific or musical notation but, aside from this, many of the issues relate to personal preference as to whether or not tutors like to mark on screen. For those who are prepared to undertake e-marking there is also a distinction between online and off-line marking.
‘Academic staff have to make the biggest adjustment for probably the smallest gain with the transfer to e-marking. Lots of wins for the admin staff and students but academics have to sit at a screen for long periods.’
‘Staff resistance to online marking is much less than it was a few years ago though there are still pockets of dissent.’
Reported benefits of e-marking for academic staff include:
– the convenience of not having to collect and carry large quantities of paper
– the convenience of electronic filing
– the security of having work backed up on an online system
– the ability to moderate marks without having to physically exchange paper
– the increased speed and efficiency of being able to reuse common comments
– improved morale through not having to write out repeated comments
– the convenience of being able to undertake originality checking in the same environment as marking
– improved clarity of marking and feedback (especially the ability to include lengthy comments at the appropriate point in the text)
– improved consistency of marking
– the ability to add audio comments and annotations as well as typed comments
The issues relating to improved clarity (particularly not having to decipher handwriting) and consistency as well as the security and convenience of the medium are also the main benefits to students.
In a post on the EMA blog we look at recent research into experiences of online marking, in particular the University of Huddersfield EBEAM project, which undertook a detailed analysis of staff attitudes and of effective approaches to encouraging different types of staff to adopt new working practices. Resistance to online marking appears to be highly personalised: for example, some older members of staff cite eye-strain as an issue with online marking, whereas others in the same age group cite the affordances of technology to adapt to their personal needs and make reading easier.
The University of Huddersfield concluded that a strongly directive approach to e-marking is likely to be counter-productive and that academics should be allowed to continue working in the way in which they feel most comfortable whilst the institution continues to emphasise the benefits of e-marking and reward those adopting the practice through a reduction in administrative duties: ‘… it is important to build a strategy and a system which provides each group with the support they need but also offers rewards and applies pressure in a consistent way such that moving away from paper-based marking and into e-marking makes the most sense to as many of them as possible.’ (Huddersfield)
The Jisc EMA blog has a discussion on ‘The right tools for the job’ that looks at the affordances of different marking tools. In the context of the quote below, ‘online marking’ refers to marking whilst continuously connected to the Internet, whereas ‘electronic marking’ includes both this element as well as marking on computer whilst not physically connected to the Internet.
‘… staff (and the University) confuse electronic marking with online marking and thus electronic marking tends to mean online marking which tends to mean GradeMark. There is thus a tendency to only (or to a large extent) support GradeMark as people perceive it to be the tool that the University want them to use. So, excessive focus on the tool as opposed to the process of providing electronic feedback. We should offer flexibility to staff in how they want to provide feedback. If there is a desire to support electronic marking (and not just feedback), then a (any) University should offer support for various forms of marking and various tools and not just concentrate on a single tool (or allow staff to think that is the only tool).’
The ability to support off-line marking is a big issue for many institutions, not least because downloading submissions compromises anonymity in many systems, which automatically add the student name when files are downloaded. The Turnitin product now supports off-line marking, but only on an iPad, and there are reported issues with information being overwritten when changing between devices or during the moderation process, although this is an issue across all marking platforms.
‘Off-line marking of submitted work is a big demand that cannot be met by our present processes if we maintain our anonymous marking policy.’
MMU guidance on Marking and production of feedback.