End-user perspectives on the challenges

As part of the next phase of the project we’ll be exploring the prioritised challenges that have been identified, through a series of visualisations and key statements. We would like to ask for your help in bringing these challenges to life by sharing anecdotes, experiences and stories from your own and your users’ perspectives. You can download the visualisations below and share your thoughts through the comments field on the blog (by Friday 5th December at the latest).

JiscEMAchallenges

47 thoughts on “End-user perspectives on the challenges”

  1. Rod Cullen

    Priority 18 – Ability of systems to support a variety of grading schemes

    We are currently wrestling with how to manage students who are Repeating Assessments without Attendance (REPWOA). The systems we have in place don’t cope well with this. Keeping track of who is required to submit what, and when, is difficult, and it is complicated further when multiple systems are involved (e.g. the VLE, a coursework management system and a student records system). All of these systems must be used together, and each must fit into the actual process. In effect, with REPWOA the student is submitting this year to an assessment that was originally set in the previous year (or years), so to support decisions about progression the systems must be able to keep track across years.

    1. Rachel Forsyth

      The system also needs to be able to record the difference when students are given the opportunity to resit these assignments ‘as if for the first time’ – i.e. without any penalty.

      1. James Trueman

        I completely agree, Rachel – that, for us, is an example of where students with mitigation are not easily tracked and supported within a system.

        In essence, current EMA systems appear to operate best where a single (and preferably not anonymously marked) group of students all submit one assessment on the same date – and successfully do so. Insert variation into that and the systems begin to struggle. Rightly, the need for systems to support different marking and moderation workflows is being discussed – but before that they need to be able to manage different ‘submission’ workflows, e.g.: single author; multiple author; first submission; second submission; mitigated/appeal resit (treated as a first submission); mitigated/appeal resit (treated as a second submission); individual authorised extension (approved before the deadline); individual authorised extension (approved after the deadline, based on a claim made with the submission); open late submission (without a second deadline); open late submission (with a second deadline); weighted/percentage grade penalty based on time of submission after the original deadline; capped grade penalty; and so on.
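
        To make that concrete, here is a minimal sketch (in Python, with invented names – not any vendor’s data model) of the kind of record a system would need just to represent the submission variants listed above:

            from dataclasses import dataclass
            from datetime import datetime
            from enum import Enum

            class Attempt(Enum):
                FIRST = 1              # first submission
                SECOND = 2             # resit / referral
                MITIGATED_FIRST = 3    # mitigation/appeal, treated as a first attempt
                MITIGATED_SECOND = 4   # mitigation/appeal, treated as a second attempt

            @dataclass
            class SubmissionRules:
                """One assignment's submission rules; field names are illustrative."""
                deadline: datetime
                authors_per_submission: int = 1           # single vs multiple author
                anonymous: bool = True
                second_deadline: datetime | None = None   # open late submission window
                percent_penalty_per_day: float = 0.0      # weighted late penalty
                grade_cap_if_late: float | None = None    # capped grading penalty

            @dataclass
            class Extension:
                student_id: str
                new_deadline: datetime
                approved_before_deadline: bool  # vs a claim made with the submission

            def effective_deadline(rules: SubmissionRules, ext: Extension | None) -> datetime:
                # An approved individual extension overrides the cohort deadline.
                return ext.new_deadline if ext else rules.deadline

        Even this toy version needs a dozen fields before any marking workflow is considered, which perhaps explains why current systems struggle.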

  2. Rod Cullen

    Priority 5 – Student engagement with feedback

    I’d like to make a point about the timing of feedback. When we initially came up with the assessment life cycle model we envisaged a strong link between the “Supporting” stage and the “Reflecting” stage. Supporting may potentially include “formative” underpinning of “summative” assessment. I have undertaken several action research projects using optional draft submissions at the supporting phase. Consistently, students who submit and receive feedback on drafts do better (usually by a whole classification band) than those who choose not to: see http://journals.heacademy.ac.uk/doi/abs/10.11120/beej.2012.20000116. This has led some colleagues to think really carefully about the type of feedback they provide at different stages of the life cycle: at the Supporting stage they focus on the task at hand, and at the Reflecting stage they focus on future tasks.

    1. Gunter Saunders

      There are some good examples of processes, sometimes supported by specific software systems, that help and encourage students to engage with the feedback received on their work in a potentially iterative process with tutors (sometimes personal tutors). There are issues, though, in getting widespread take-up of these kinds of approaches. Although staff generally acknowledge that such processes are good, they also rightly point out that the iterative part takes time. In addition, the weaker students tend not to engage. Two shifts seem needed: staff need more time (or to spend less time on other things, e.g. admin or maybe even lecturing), and students need help to better understand why reflection and dialogue around their work is so important. This links also to priority 4 and priority 6.

    2. Anna Verges

      We would love to see software integrate pedagogically sound practices into the ‘mechanics’ of the EMA process. I am referring to things such as the ability in the software (say Turnitin or Blackboard assignments) to integrate self-assessment tasks, e.g. students self-evaluating their own work before submitting, or the ability to carry on the feedback conversation once the feedback has been returned.

      Engagement with feedback is difficult to achieve if the software understands feedback as a unidirectional provision. Software needs to facilitate dialogical feedback, self-assessment and engagement with assessment criteria, as well as the collection of feedback into feedback portfolios (longitudinal views – Jisc’s EMA challenge 10).

      At a recent demo of the Bb roadmap, Bb representatives showed how, in new iterations of the product, an instructor will be able to set up a discussion board after the grades and feedback are released. That is something of an improvement, but software providers need to treat feedback as a learning task – not as an end product but as a dialogical and longitudinal process.

      1. Chris Turnock

        I also think the dialogue should not only be concerned with the completed assignment, but also consider future assignments. One technique I have seen used involves identifying lessons learnt from the feedback (and dialogue) on a submitted assignment. Student and marker/supervisor then identify action(s) for related future assignment(s), and the student asks the person marking the future assignment to focus their feedback on specific aspects of that assignment, based on the aforementioned actions.

  3. Rod Cullen

    Priority 4 – Need to develop more effective student assessment literacies

    We have recently been asked to consider providing students at level 4 with the opportunity to undertake in-year reassessment, so that students who fail an assignment before Christmas get the chance to be reassessed during the spring term rather than over the summer following the exam board in June. Although I have some sympathy with this, my initial thought is: why don’t we try to pick up the fact that they are struggling before they hand in their first assignment?

    I have been convinced by the work of people like Harry Torens that students’ experience of assessment and feedback in the FE colleges and sixth forms that the majority come from is very different from what it is at university: they are used to a lot more in-assessment support and provision of feedback. Rather than dealing with early failure through in-year reassessment, I think we need to look at redesigning level 4 so that it provides a more scaffolded approach to assessment, with a much greater emphasis on the “Supporting” stage of the life cycle. We need to explain to students, through dialogue around assessment practice, how things are going to be different at level 5, and use level 4 to get them ready for this.

    1. Rachel Forsyth

      There are lots of concerns about in-year retrieval, too – are we piling up work around the time when students are doing other scheduled assessments?

      I agree that it would be good to pick up problems earlier, but this isn’t always easy, especially with big classes.

      Is redesigning level 4 something that is a prerequisite to engaging with EMA? That might present a significant barrier for institutions.

  4. Rod Cullen

    Priority 2 – Reliability of submission systems

    For the last two years we have had significant problems with TII falling over at critical periods, just before Christmas and just before Easter. Even with a good plan B in place to cover such eventualities, the level of stress it causes for staff and students alike is significant.

    I think it is probably essential to separate the upload system from the transfer of the submission into the marking or management tool. Once the student has uploaded their assignment they can relax, and the submission can be held in a queue until the required system is back up and running.
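
    A minimal sketch of that separation (my own illustration in Python, not a description of any existing product): the student-facing step only persists the file and issues a receipt, while a background worker drains the queue into the marking tool whenever it is available:

        import queue
        import shutil
        import time
        import uuid
        from pathlib import Path

        INBOX = Path("submission_inbox")       # durable store owned by the upload system
        INBOX.mkdir(exist_ok=True)
        pending: "queue.Queue[Path]" = queue.Queue()

        def accept_upload(uploaded_file: Path) -> str:
            """Student-facing step: persist the file and return a receipt immediately."""
            receipt = str(uuid.uuid4())
            stored = INBOX / f"{receipt}_{uploaded_file.name}"
            shutil.copy(uploaded_file, stored)  # from here on the student can relax
            pending.put(stored)
            return receipt

        def forward_worker(send_to_marking_tool) -> None:
            """Background step: drain the queue; a failure just requeues the item."""
            while True:
                item = pending.get()
                try:
                    send_to_marking_tool(item)  # push to the marking tool when it is up
                except Exception:
                    pending.put(item)           # marking tool down: requeue and wait
                    time.sleep(60)

    The key property is that the student’s receipt never depends on the marking tool being up.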

    1. Chris Turnock

      It is also important to remember that staff need access to submitted work in order to mark within relatively tight deadlines. Whilst resistance to digital marking may be less than it was several years ago, there remains a body of sceptics looking for reasons not to engage with electronic marking.
      We may be able to develop methods to protect students from the failings of a third-party system: here at Hull we get students to submit through the VLE (Sakai) and then export assignments to Tii. Students know their assignment has been submitted and so can relax, and the VLE keeps attempting to export the assignment to Tii.
      The problem is that the marker can only use GradeMark once the student’s assignment is in Tii. Performance problems will frustrate most academics, but will provide cynics with an excuse not to engage in digital marking. Of course, other VLE platforms provide an alternative to GradeMark, and that may be the future if Tii performance problems persist.

      1. James Trueman

        I fully understand why a number of institutions use multi-phase submission processes – this gives their student body the ‘security’ of submission. The other side of this, however, is that I believe the resultant cycle of attempts to export to Tii multiplies the load on that system, as the same papers are re-sent. Consequently, 1,000 calls quickly become 10,000 calls and the system struggles to right itself. Perhaps we need a different solution?
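
        One different solution, sketched here in Python under my own assumptions (not how any current integration actually behaves), is exponential back-off with jitter on the export retries, so that a failed batch of 1,000 papers does not snowball into 10,000 near-simultaneous calls:

            import random
            import time

            def export_with_backoff(send, paper, max_attempts=8):
                """Retry an export, waiting 1, 2, 4, ... minutes (plus jitter) between tries.

                `send` is whatever call pushes one paper to the matching service; it
                should raise on failure. The jitter stops every institution's VLE
                retrying at the same instant after an outage.
                """
                for attempt in range(max_attempts):
                    try:
                        return send(paper)
                    except Exception:
                        delay = min(60 * 2 ** attempt, 3600)  # cap the wait at an hour
                        time.sleep(delay + random.uniform(0, delay / 2))
                raise RuntimeError(f"gave up on {paper} after {max_attempts} attempts")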

  5. Rod Cullen

    Priority 1 – Ability to handle variety of typical UK marking and moderation flows

    I’d include in this the construction of scoring rubrics that allow easy weighting of marks between criteria and allocation of specific marks within grade boundaries. Simply marking an individual assignment out of 100 using Moodle and TII scoring rubrics requires some really complex calculations at the moment.
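
    For illustration, the calculation that ought to be built in is no more complex than this (a Python sketch with invented criteria and weights):

        # hypothetical rubric: criterion -> (weight out of 100, mark within the band)
        rubric = {
            "argument":     (40, 68),
            "evidence":     (35, 72),
            "presentation": (25, 55),
        }
        assert sum(w for w, _ in rubric.values()) == 100

        total = sum(w * m for w, m in rubric.values()) / 100
        print(f"Overall mark out of 100: {total}")   # 66.15

    The point is that the tool, not the marker, should be doing this arithmetic.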

    1. Chris Turnock

      We may also want to consider being able to handle student-centred approaches to marking, i.e. self and peer assessment, as well as group assessment. The latter becomes complex when an assessment may have a number of component marks from various sources, e.g. a staff mark for the collective output, a staff mark for individual student output such as a reflective diary, a student mark rating their personal contribution to the group exercise, and student marks rating other group members’ contributions to the group exercise.
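
      As a toy illustration of how tangled that gets, here is a WebPA-style calculation in Python (the weightings and ratings are invented, not any institution’s scheme):

          # Invented weighting: 70% group output (moderated by peer ratings of
          # contribution), 30% individual reflective diary.
          group_mark = 65                      # staff mark for the collective output
          diaries = {"ana": 70, "ben": 58}     # staff marks for individual outputs
          # each student rates every member's contribution (their own included) out of 5
          ratings = {"ana": {"ana": 4, "ben": 3}, "ben": {"ana": 5, "ben": 3}}

          def contribution_factor(student: str) -> float:
              """WebPA-style: your share of all contribution points, scaled by group size."""
              received = sum(r[student] for r in ratings.values())
              total = sum(sum(r.values()) for r in ratings.values())
              return received / total * len(ratings)

          for s in diaries:
              moderated = min(100, group_mark * contribution_factor(s))
              final = 0.7 * moderated + 0.3 * diaries[s]
              print(f"{s}: {final:.1f}")       # ana: 75.6, ben: 53.8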

    2. Anna Verges

      At the Faculty of Humanities (Manchester Uni.) we carried out a preliminary audit of marking workflows and identified four major marking workflows that capture the practice of a large number of departments.

      These are:

      1. Moderation before feedback is returned to students
      2. Moderation after feedback is returned to students
      3. Second marking before feedback is returned to students
      4. Blind second marking before feedback is returned to students

      In other words, we think that the key variables are:
      a) the distinction between moderation and second marking
      b) whether QA processes take place before or after students receive their grades/feedback

      We found that the terms moderation and second marking are often used interchangeably, which does not help the analysis, so we made an effort to agree on the terminology first. In moderation systems the moderator looks at a sample of scripts and feedback goes to the first marker; in second marking models all papers are marked by two individuals and students receive two sets of feedback.

      External examiner moderation applies to all four models, but it tends to happen after internal QA has been completed, so I have left it out of the list.

      I would be interested to hear whether other institutions feel that most of the workflows in their respective institutions are captured by these four. I say most advisedly, because I have also come across algorithmic models and other peculiar systems, but in our case these account for a small proportion of existing workflows.
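
      For what it is worth, the two variables identified above (plus a ‘blind’ flag) are enough to encode all four workflows – a sketch of how a system might represent them, in Python with invented names:

          from dataclasses import dataclass

          @dataclass(frozen=True)
          class QAWorkflow:
              second_marking: bool   # False = moderation of a sample, True = all scripts marked twice
              before_release: bool   # QA happens before students see grades/feedback
              blind: bool = False    # second marker cannot see the first marks

          WORKFLOWS = [
              QAWorkflow(second_marking=False, before_release=True),              # 1. moderation, pre-release
              QAWorkflow(second_marking=False, before_release=False),             # 2. moderation, post-release
              QAWorkflow(second_marking=True,  before_release=True),              # 3. second marking, pre-release
              QAWorkflow(second_marking=True,  before_release=True, blind=True),  # 4. blind second marking
          ]

      External examiner moderation would then be an additional stage on top of whichever base model applies.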

      1. Rachel Forsyth

        I think this would capture much activity. We also have third marking for situations where first and second markers can’t agree. We’d also possibly distinguish internal and external moderation. We have tried to implement clear definitions of second marking and moderation as one of the steps we’ve taken to prepare for EMA. Our procedures are here: http://www.mmu.ac.uk/academic/casqe/regulations/docs/verification_marking_moderation.pdf – you’ll see that we also include ‘team marking’ as well as ‘independent second marking’.

  6. Rod Cullen

    Priority 8 – Academic resistance to online marking

    I have found that having the right kit can really help in this respect. This doesn’t mean that there is a single set-up that will suit everyone. Personally, I prefer marking on screen using a pair of large monitors. I like the discipline of having to sit at my desk in my office at home, where it is nice and quiet and I can concentrate properly on the task at hand. Like many colleagues I now work in a shared office at work (with nearly 30 other colleagues) and I find I can’t mark effectively in that type of environment – I find it difficult to concentrate, and things like audio recording of feedback are impossible. My line manager understands and I have a great deal of flexibility to work from home, which works really well for me, but I know that some line managers are less understanding.

    Other colleagues seem to like the iPad (e.g. the app for TII) as they like to mark on the train or in other more informal places – this works for them. I find the iPad too small, and the touch screen is a pain for anything other than typing a couple of words. Plus I don’t own an iPad. There can be further difficulties when colleagues marking the same assignment prefer to work in different ways. Here I think the software providers need to do more to ensure that their tools are multi-platform. Why, for example, is offline marking for TII only available on the iPad? It is complete madness and so short-sighted of them. So three key things here for me:

    1. Cross-platform compatibility
    2. Providing the right kit for the job
    3. Management need to be supportive of flexible working patterns

    1. Anna Verges

      The feedback from our academics is that they would like to be able to bulk download, mark in the native application (Word or PDF) and then bulk upload. It seems straightforward, but neither of the two major providers (Tii or Bb) is able to support this!

      The second requirement for offline marking was that academics need to be able to download and upload selectively, i.e. only those files that they want to download or upload. This is especially useful where more than one academic is marking a given cohort – otherwise it is just too easy to overwrite colleagues’ marking…

      Bb has taken work developed in the UK and produced an ‘Assignment Handler’ building block that allows bulk upload. However, from discussions with institutions that have used it, this building block needs ‘perfecting’.
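
      A guard against the overwriting problem could be as simple as recording who each script is assigned to and refusing uploads for anyone else’s – a Python sketch under my own assumptions, not how Tii or the Bb building block actually behave:

          assigned_marker = {"essay_017.docx": "averges", "essay_018.docx": "jsmith"}

          def push_marked_file(filename: str) -> None:
              print(f"uploading marks for {filename}")   # stand-in for the real EMA call

          def bulk_upload(files: list[str], current_marker: str) -> None:
              """Upload selectively: skip any script assigned to a different marker."""
              for f in files:
                  owner = assigned_marker.get(f)
                  if owner != current_marker:
                      print(f"skipping {f}: assigned to {owner}")
                      continue
                  push_marked_file(f)

          bulk_upload(["essay_017.docx", "essay_018.docx"], current_marker="averges")
          # uploads essay_017 only; essay_018 belongs to jsmith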

  7. Chris Turnock

    Priority 5 – Student engagement with feedback

    Several tools have been developed to enable students to make better use of assessment feedback. In my experience there are a number of issues that need to be considered:

    1. Students need to be prepared and supported in how to make the best use of feedback for future assignments, especially as feed-forward information. One practical way is to ask students to analyse the feedback from an assignment and identify a particular aspect of their next assignment that they would like the marker to focus on in their feedback.
    2. Staff need to build into their workload the time to engage with students and support them in understanding feedback, otherwise there is a risk that students become frustrated at the paucity of support.
    3. Feedback needs to be thought about at the level of the individual assignment and/or module, but it should also be thought of more holistically. Personal academic supervision at a programme level would be enhanced if personal supervisors/tutors could see all feedback an individual student has received over a semester/year.

  8. Chris Turnock

    Priority 9 – Need for greater creativity
    Whilst many universities promote the use of a broad range of assessment types, many academic staff need practical help in understanding how technology can help them meet this objective.
    I have found that the use of a blog not only helps bring variety and creativity into the assessment process, but also spreads the burden of work over a semester for both the student and the marker. One way I did this was by giving students a short weekly task requiring them to set out how they planned to address the task, report what they did and describe the outcome. This ran over ten weeks, with the task report for each week allocated up to 10% of the marks.
    There are also tools for peer assessment: some VLEs facilitate it well, as do Turnitin and WebPA. I have experience of using the Blackboard Peer Assessment tool, which resulted in a higher level of student engagement with an activity that involved students writing an information sheet for recruiting people into health care research. The exercise had previously been dry and unpopular with students, but the introduction of peer assessment led to a marked change in both the level of student engagement and the quality of student work.

  9. Chris Turnock

    Priority 2 – Reliability of submission systems

    This is one of the main risks associated with electronic submission of assessed work. The ability to stress-test systems is paramount, in terms of both the volume of student submissions and the nature of student work, e.g. file size and acceptable file formats.

    Whilst the sector is very aware of the performance problems of one widely used system in coping with the volume of work submitted at certain times in the academic year, systems also need to cater for student work that consists of very large files, e.g. >1GB.

    Another important consideration is ensuring the marker can open, and so assess, a student submission. Systems need to ensure students only submit work that academics can open and mark.
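
    In practice that means validating at the point of upload rather than at marking time – a minimal Python sketch, with an invented allowlist and size limit:

        from pathlib import Path

        ALLOWED = {".docx", ".pdf", ".pptx"}   # formats markers can actually open
        MAX_BYTES = 100 * 1024 * 1024          # invented 100MB ceiling per file

        def validate_submission(path: Path) -> list[str]:
            """Return a list of problems; an empty list means the file is accepted."""
            problems = []
            if path.suffix.lower() not in ALLOWED:
                problems.append(f"{path.suffix or 'no extension'} is not an accepted format")
            if path.stat().st_size > MAX_BYTES:
                problems.append("file exceeds the size limit for this assignment")
            return problems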

    1. Justin Steele-Davies

      This is very interesting. The Jisc-funded eAssignment system allows the coordinator to set the file type explicitly, so that it is clear what is expected of students. This year at Southampton we have had files in excess of 1.5GB uploaded for assignments by students.
      When using the originality detection tool, student files are submitted over the course of several days, with back-off to allow for the service being unavailable. Students can still submit work, but no checks are made until the third-party system is up.

  10. Chris Turnock

    Priority 1 – Ability to handle variety of typical marking and moderation flows

    It is unlikely that any system can be generic enough to handle the variety of processes used in higher education to mark student work. It may even be a struggle at the level of the institution. Therefore, institutions either have rigid processes or systems need to be flexible.

    Systems also need to be able to differentiate staff roles, to identify which members of staff should see which students have or have not submitted (i.e. administrators), and which members of staff should receive anonymised student work that they will be marking/moderating (i.e. academics). That is probably an oversimplification of the role differentiation, and I have experience of a range of hybrid models that reflect local practice.

  11. Nigel Owen

    The challenges that resonate with me are the technical infrastructure ones. I look after a small team of developers and an administrator, keeping the lights on, the discs spinning and the software upgraded across the Learning Technology Services at Nottingham.

    Priority 2 – Reliability of submission systems.

    The main reliability issues we have are with the cloud Turnitin service that is integrated with Moodle. We have a “Test Your Text” formative assignment submission that all students can access outside of their formal assignment submissions. This has a high throughput (a class size of 5k students and hundreds of assignments submitted daily). There are numerous defects within the Moodle integration: common complaints from students are that they do not get their matching receipt email, matching reports do not appear, and submissions take a long time to upload. New integrations are made available with reasonable frequency; however, these seem to have had scant testing (as many new defects found as fixed)! Nor are they released onto the main Turnitin download pages; they are hidden away in an obscure location.

    There is a desire to move away from Turnitin, to increase reliability and to develop in-house what are seen as obviously missing features. Development of a new solution would fall to my team. However, I am exceptionally nervous of undertaking this, given the high-stakes nature of assignment submission and the very, very low tolerance of software defects in this area.

    1. Gunter Saunders

      The reliability issues of Turnitin have been a problem for us too, and we are seeing a measurable shift away from that system to the Blackboard system, simply because the managed private cloud installation is more reliable through its capability to adapt to load. We are able to inform our providers when there is going to be a peak in submissions, and they respond with monitoring and, where necessary, greater processing resource at their end.

  12. Nigel Owen

    Priority 1 – Handle different marking and moderation workflows.

    There are inherent assumptions hard-coded into EMA systems, e.g. that each assignment has only one mark. Second marking of assignments is an often-requested feature that appears to be missing. I would envisage that this would be quite complicated to design, as I would anticipate that it is something that could not be deployed at an institutional level. I suspect that each school/department has its own marking workflows, and schools take advantage of the flexibility that doing this manually provides, i.e. if reviewer x is taking too long then marker y can review the papers from marker z’s class instead. I cannot imagine how you could provide a framework in which a user could define their workflow and then assign individuals to each of the steps!
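
    For what it’s worth, the shape of such a framework is imaginable even if building it robustly is not – a Python sketch (all names invented) of user-defined steps with reassignable people, including the ‘reviewer x is taking too long’ case:

        from dataclasses import dataclass, field

        @dataclass
        class Step:
            name: str                    # e.g. "first marking", "moderation"
            assignees: list[str]         # who may complete this step
            done_by: str | None = None

        @dataclass
        class Workflow:
            steps: list[Step] = field(default_factory=list)

            def complete(self, step_name: str, user: str) -> None:
                step = next(s for s in self.steps if s.name == step_name)
                if user not in step.assignees:
                    raise PermissionError(f"{user} is not assigned to '{step_name}'")
                step.done_by = user

            def reassign(self, step_name: str, new_assignees: list[str]) -> None:
                next(s for s in self.steps if s.name == step_name).assignees = new_assignees

        wf = Workflow([Step("first marking", ["marker_y"]),
                       Step("second marking", ["reviewer_x"])])
        wf.complete("first marking", "marker_y")
        wf.reassign("second marking", ["marker_z"])   # reviewer x is taking too long
        wf.complete("second marking", "marker_z")

    The hard part, as you say, is that every school would want different steps – but that is configuration rather than code.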

    1. Chris Turnock

      I agree, Nigel. A good starting point would be to have a single institutional policy on the work processes associated with first marking, second marking and moderation. Most universities have a policy on the timescale for returning feedback to students, but it may be that these need redefining to ensure internal processes can be completed in a timely manner.
      There is a risk that developing such processes, and monitoring how they are adhered to, becomes very top-down, with staff feeling at risk if unable to meet deadlines. Others might argue that staff are paid to deliver teaching, which includes assessment, and so they should normally meet deadlines.
      Of course, technology has not suddenly made meeting marking deadlines a requirement. However, a robust system makes the whole process more efficient, especially by precluding the need for the marker to come on campus to collect marking, so in principle it should speed things up.
      Having said there is a risk of adopting a ‘stick’ approach, the carrot might be the provision of devices, e.g. iPads, to facilitate marking. This is something that is happening at a number of institutions, including some departments here at Hull.

  13. Nigel Owen

    Priority 8 – Academic resistance to online marking.

    Having demonstrated online marking (GradeMark) to academics for at least five years, I am amazed that it is not utilised more. IMHO it’s a very well designed tool that can massively increase efficiency in marking. However… I have never marked a paper using it, so I can’t really comment 😉

    A lot of departments at Nottingham have their administration teams driving the roll-out of online marking, e.g. for the space savings (no more stored paper) and the time and cost savings in the admin office (no more processing and paper postage).

    1. Chris Turnock

      Some universities are, at either institutional or departmental level, implementing digital submission (where appropriate) and marking for all assessed work. My own experience of this is that it needs to be carefully managed; the quality of staff support is a key factor in success. My personal opinion is that the savings made from digitising the process should be used to invest in staff, i.e. in support and equipment. Provision of iPads to all markers is one approach, but that assumes all staff want to use one to mark. Flexibility in the choice of hardware is important.

      Another important consideration is ensuring consistency in the nature of feedback for individual assignments. For example, an essay could be annotated by some markers but not by others. Such differences in practice will be picked up by students and result in some students feeling aggrieved. Given that assessment feedback is an NSS question, academic teams need to consider how they can be consistent in their provision of digitised feedback for an assignment.

  14. Gunter Saunders

    I absolutely agree with this post and have also seen many examples where the use of technology in an assessment enlivens the experience for students and can serve to spread the assessment load across a semester. I believe it is essential for assessments to better reflect the workplace experience that students will meet after university – there is a slow but growing level of feedback from academic staff that they see this as important as well. At a recent workshop to discuss this, some staff felt that their own work practices served to propagate traditional approaches to assessment.

  15. Gunter Saunders

    This is a general comment. It feels to me as though many of the priority areas identified, though not all, will benefit from changes in human behaviour as much as from changes in technology. None of them is easy, of course, but different ways of thinking about assessment approaches (highlighted in priorities 6 and 9), changes to staff working practices and, to some extent, to the predominant teaching and learning approach, as well as altered student expectations – all of these human outlooks/behaviours can potentially have as significant an impact on what can be achieved as technological change.

  16. Stefanie Anyadi

    Many of our academic colleagues are reluctant to mark online because of worries about being able to use the technology, and they don’t have the time to experiment. What works for us (echoing Nigel Owen above) is department-based admin staff, in consultation with learning technologists, arranging small-group, subject-specific workshops and one-to-one support (available at very short notice). We promote this by giving examples of time savings, the experiences of other academic colleagues and positive comments from students.

    1. Stefanie Anyadi

      Priority 10 – Ability to gain a longitudinal overview of student achievement
      It would be great to have an overview of how each student has done in each assessment, as this would enable us not only to track their progress but also to spot patterns and anomalies, e.g. do students do better or worse on a particular type of assessment? Do particular types of students do better with some assessment types? It would also be useful for students and tutors to be able to see all feedback for various modules in one place.
      At the moment, coursework marks and feedback are stored on our VLE at a modular level and are then entered onto our Student Information System alongside other marks, e.g. for exams.
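
      Once the marks sit in one place, the pattern-spotting is straightforward – e.g. this Python sketch over invented records:

          from collections import defaultdict
          from statistics import mean

          # invented records: (student, module, assessment_type, mark)
          marks = [("s1", "LING101", "essay", 62), ("s1", "LING102", "exam", 48),
                   ("s2", "LING101", "essay", 70), ("s2", "LING102", "exam", 74)]

          by_type = defaultdict(list)
          for student, module, kind, mark in marks:
              by_type[kind].append(mark)

          for kind, ms in by_type.items():
              print(f"{kind}: mean {mean(ms):.1f} over {len(ms)} marks")
          # essay: mean 66.0 over 2 marks
          # exam: mean 61.0 over 2 marks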

  17. Stefanie Anyadi

    Priority 3 – Lack of interoperability between marking systems and student record systems
    It would be great if this could be solved, as the lack of interoperability introduces errors and takes a lot of time. The main issue for me (aside from the technical issues of actually matching and transferring records) is that we often set up assessment records on our student record system vaguely on purpose, e.g. a module may be assessed by ‘a portfolio of coursework’. The module coordinator then has quite a lot of flexibility to design the module content and change the exact assessment specification until the start of term, which is when we have to give the details to students. This flexibility also supports creativity in assessment and enables lecturers to design really up-to-date and research-based content. I’m not sure how integration of the two systems would be possible without a requirement to fix the exact assessment details much earlier in the annual cycle.

    1. Stefanie Anyadi

      There is another intermediate step where penalties are applied, e.g. for late submission or over-length submissions. This is completely manual for us at the moment, including checking whether extensions have been granted – it is easy to make mistakes.
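
      The rules themselves are usually mechanical enough to automate once extensions are recorded somewhere a system can read – a Python sketch with invented penalty values (real schemes vary by institution):

          def apply_penalties(mark: int, days_late: int, words: int, word_limit: int,
                              extension_days: int = 0) -> int:
              """Invented scheme: 5 marks per day late; more than 10% over length: -5."""
              effective_late = max(0, days_late - extension_days)  # honour any extension
              mark -= 5 * effective_late
              if words > word_limit * 1.10:
                  mark -= 5
              return max(mark, 0)

          print(apply_penalties(mark=64, days_late=2, words=2300, word_limit=2000))
          # -> 49: two days late (-10) and more than 10% over length (-5)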

  18. Wilma Alexander

    Challenge 1 – the ability to handle the variety of UK workflows – poses issues for us because of the variety within our one institution. Understanding not only the range of workflows (we map them where we can, but it rapidly gets very complex indeed) but also the range of units and people involved makes a difference. Sometimes staff (with the best of intentions) engage in discussions about changing assessments, with very sound pedagogic drivers, but unless the impact on administrative and technical issues is understood, such innovations run into problems which leave all concerned frustrated. Getting all of the relevant people involved in planning any changes to assessment practice is quite successful when it can be done. We are making slow progress on this.

  19. Wilma Alexander

    Challenge 4 – the need to develop more effective student assessment literacies – we find varieties of peer assessment and review exercises can be very effective in supporting the development of this literacy, but the huge variety of practice under the heading of “peer review” can be hard to support. A range of technologies (WebPA, PeerMark, ePortfolios) can allow for very specific kinds of peer interaction, but a lot of work has to go into understanding which is the “best fit” for a particular piece of work. Not all staff are willing to compromise or change a practice which works well in the classroom in order to have a digital component.

  20. Wilma Alexander

    Challenge 8 – academic resistance to online marking. This is a very real concern, and we have found that even the most willing staff comment that online systems still don’t offer the functionality required to easily cross-compare scripts, move from specific to general comments, and manage feedback and marks separately. These all go together, because reluctant staff will find any one of them a major “sticking point” for the adoption of online marking.

  21. Wilma Alexander

    Challenge 1 – workflows. I keep returning to this because it seems to me to be key to the whole process of changing assessment practice. It is not just a problem because systems can’t support all the workflows; even before we get to that, the ability to fully map all the workflows, and to understand which bits of activity are carried out where, is crucial. We had some success in getting teams to work together on this using the Viewpoints toolkit (http://wiki.ulster.ac.uk/display/VPR/Home), and I am working with schools to run workshops bringing together teaching and administrative staff to map typical workflows. We are doing this together with student systems work on improving programme assessment information and exploring grade exchange. So there are many different strands, and parts of the University, trying to coordinate developments in this area, and thereby identify what we should be asking suppliers and in-house tech support to address.

  22. James Trueman

    Taking issues in the workflow order:

    Priority 3 – interoperability challenges make ‘managing’ EMA systems a technical and financial burden. Whilst my own university is in a somewhat ‘unique’ position (using SharePoint as a VLE), interoperability challenges exist in all systems I am aware of. We are affected by a basic lack of connectivity between base systems, which inhibits even basic enrolment and SSO workflows. But even where some form of ‘plug-in’ is available, there appears to be a mismatch between the data held in the SRS and the function of the marking system. For example, a common (but variable) assessment workflow may include a first submission, a second submission (resit, referral etc.), and possibly even mitigation, all of which may allow some sort of ‘late’ submission for individual students at each point (extension, extenuating circumstances etc.). But these basic assessment elements, recorded in the SRS, cannot easily be reflected in the marking system (for us, particularly extensions and mitigation). Some institutions allow extensions; some have late submission policies with an effective ‘second’ deadline.

  23. James Trueman

    Priority 9 (and within that, 1) – these seem to fit together in my thinking, as I am considering issues such as multiple-author submission and group assessment – not easily managed in some (granted, not all) EMA systems.

    1. James Trueman

      Can’t seem to edit my own post?

      In addition to group submissions, we too are frustrated by the file types, but more so the file sizes, accepted by Tii.

      Some colleagues in creative arts courses would also like students to submit drafts of their work alongside the final version – this may include sources of inspiration and brainstorming notes – in order that they can see the evolutionary path that the work has taken. This is currently challenging.

  24. James Trueman

    Priority 2 – the reliability of the EMA system is paramount. This is most obvious at peak submission points, where degradation issues have been commonly experienced. I agree that the student-facing experience has to be as smooth and stress-free as possible, particularly at high-stakes submission points (as it should also be for the academics etc.). However, this also covers the underlying functions that the EMA system is expected to perform. Any text-matching service should perform as expected, without unexplainable glitches. On-screen marking tools should equally operate smoothly and without idiosyncratic ‘bugs’ (such as dialogue windows advising that a submission is a draft and can be replaced – when the assignment is not set up that way). Changes to an assignment should be possible at any stage; for example, in Tii anonymous marking cannot be set after submissions have started – even if the error is discovered at that point. And finally, grades and feedback that are provided should be returned to the relevant system (or at least stay where they are entered), even if the return date for them is brought forward.

    With all that said, I frequently find that a system is seen to be performing incorrectly or unreliably because the end user has a different perception of the system’s capabilities than is designed in, or simply does not understand how to use the system. Perhaps that is an argument for making EMA systems as intuitive and flexible as possible – but even then, EMA systems cannot be all things to all men, and cannot cater for the resistant user, or those who refuse to find out ‘how it does work’ and persist in thinking ‘how it should work’ – even when the same outcome is possible.

  25. James Trueman

    Priority 7 – we really would like to move towards a much more focused process of developmental ‘formative’ feedback through the module, with a grade awarded at the end. This might see submissions occurring at various points, and then a final submission which alone is graded.

    Equally, we would like to use a system similar to the one expressed in this priority, where feedback is provided and the student engages with it before receiving their grade – currently the two elements are tied together.

