How many models of marking? Still counting …

The EMA Requirements Map working group is looking at common workflows. Discussions about how many models of marking exist have been going on for a while (see earlier post How many models of marking are there?).

The key remaining question seems to be whether moderation of multiple markers’ work as opposed to moderation of a single marker actually constitutes a different model.

Point of View 1 – these are not fundamentally different models, i.e. they serve exactly the same purpose and have the same outputs. Cohort size is the key parameter that determines how the model is carried out, i.e. whether each role is filled by one person or by several (indeed, in each of the models the same role could have one or many actors).

Point of View 2 – these are different models because, with multiple markers, the emphasis on consistency is much greater and hence the technical requirements are different. The moderator needs to (a) view the range of marks and check that, say, all fail grades or all ‘50s’ are applied consistently across markers, and (b) see a display of grades by marker and the distribution of marks by marker. Technically this is more complex to achieve than moderating the marking of a single marker.
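To make the technical point concrete, here is a minimal sketch (in Python, purely illustrative) of the kind of per-marker summary a moderator would need in the multi-marker case; the marker names, marks and the pass mark of 40 are all invented for the example.

```python
from collections import defaultdict
from statistics import mean

# Invented example records: (marker, student, mark out of 100).
marks = [
    ("Marker A", "s001", 48), ("Marker A", "s002", 62), ("Marker A", "s003", 39),
    ("Marker B", "s004", 55), ("Marker B", "s005", 71), ("Marker B", "s006", 50),
]

# Group marks by marker so a moderator can compare distributions side by side.
by_marker = defaultdict(list)
for marker, _student, mark in marks:
    by_marker[marker].append(mark)

for marker, ms in sorted(by_marker.items()):
    fails = sum(1 for m in ms if m < 40)  # assumed pass mark of 40
    print(f"{marker}: n={len(ms)}, mean={mean(ms):.1f}, "
          f"range={min(ms)}-{max(ms)}, fails={fails}")
```

Even this simple grouping is a requirement that only arises in the multi-marker case, which is the nub of the debate above.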

This is an interesting point of debate with implications for how we define an academic model. Are they variants of the same thing with different parameters or are they different in ways that are academically significant? We would like your views ASAP.

Below is a rough representation of this workflow with some of our comments (click on the image to expand it or download the pdf). Let us know what you think.

5. Marking & Feedback Model 1 Early Moderation v2


Assessment and Feedback institutional checklist

The toolkit working group will be looking at the usefulness of tools & checklists over the next few weeks, so here’s an example just to get us thinking. Comments welcome.

Checklist for institutional Self-Assessment

  • Strategy and policy
  • Is your institution’s approach to assessment and feedback articulated in institutional strategy and, if so, at what level (institution-wide or devolved)?
  • Is your assessment and feedback strategy underpinned by educational principles?
  • What guidance exists to help staff apply the principles in practice?
  • Academic practice
  • Does your institution use consistent templates for assignment briefs, assessment criteria, marking rubrics etc.?
  • How and when do academic staff e.g. programme teams discuss and compare approaches to feedback?
  • Does your institution use any audit tools to analyse tutor feedback?
  • How do part time teaching staff develop an understanding of the programme wide approaches to assessment and feedback?
  • What opportunities are there for student/tutor dialogue around feedback?
  • Are academic staff able to view past feedback for a particular student?
  • Learner engagement
  • How are students inducted into assessment and feedback practices and encouraged to develop assessment literacy?
  • What mechanisms for learner self evaluation are promoted and supported?
  • What mechanisms for learner peer evaluation are promoted and supported?
  • How are learners involved in dialogue around developing assessment and feedback practice?
  • Do students have access to personalised information about assessment and/or feedback deadlines for submission and/or returned work?
  • Are students able to view all of their past feedback in one place?
  • Curriculum design
  • Does your institution have any policies relating to the number and type of assessments per module/course?
  • Are learning outcomes for specific modules and courses mapped to scheduled assessments?
  • Does your institution use any modelling tools to look at the balance of formative/summative assessment and the scheduling of assessments across modules and courses?
  • Does your institution have a full overview of its assessment activity and the implications of submission peaks on administrative support and IT systems?
  • Does your learning design implement educational principles, e.g. ‘Help clarify what good performance is’, in a teacher-centric or a learning-centric way?
  • Employability
  • Does your institution use a range of assessment types that mirror actual working practices in the relevant discipline?
  • Is the development of student self and peer evaluative capability valued as an important attribute of employable graduates?
  • Does your institution involve employers in determining what should be assessed?
  • Processes and technologies
  • Does your institution have a policy on online submission and/or feedback/marking?
  • Does your institution have consistent processes relating to the submission, marking and return of assessed work?
  • Do staff have ready access to information about available technologies and ways in which they can support assessment and feedback practice?
  • Are students able to employ an appropriate range of technologies in producing assessed work?

A brief typology of feedback hubs

As part of the Jisc’s feedback hub feasibility project, I’ve started to look at systems that can present feedback in a holistic, programme wide way to learners and teachers. Having talked to the creators of a number of these hubs, there seem to be three broad types, which I’ll outline here.

One of the outcomes of Jisc’s ongoing Electronic Management of Assessment programme is the sector’s desire for a feedback hub. This is a separate application, system or service that provides a degree programme-wide or even life-wide view of the feedback a student has received on assignments over the course of their learning journey. With such a hub, learners can better track their progress, see in what areas they have improved and what aspects still need work. Their tutors and teachers can use the same view to feed forward, or identify areas where some action now could make a difference.

The feedback hub feasibility project is a small piece of work that follows up on the desire for a feedback hub by looking at what systems are already available, how they meet –or could be made to meet– what people want from them, and how Jisc can help make these systems available.

Research is ongoing, but from the systems we’ve examined so far, the fundamental differences between them are where they live, and whether they’re a feature of a larger system or a stand alone application. This has consequences for how easily a particular hub can gather feedback data, how people access the hub, and whether that hub can present feedback alongside other progress indicators such as analytics dashboards and confirmed marks.

VLE feature

These are the simplest feedback hubs: they are an integral part of a larger VLE, and they live in the same place as it does: either with a hosting company or on an institution’s servers.
They work by gathering links to all assignments and their associated feedback from the VLE’s own database, and sometimes from assessment services such as Turnitin as well. The clear advantages of this approach are that it is comparatively easy to get comprehensive access to assignments kept in the VLE, and that learners and teachers can interact with the hub in a familiar environment. Integrating a feedback hub view with other progress indicators from the same VLE should also be easy to develop.
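Purely as an illustration (not a description of MyFeedback or any other product), the sketch below shows the kind of chronological, programme-wide list such a plug-in assembles once it has pulled assignment and feedback rows out of the VLE; the data structure, courses and comments are all invented.

```python
from dataclasses import dataclass
from datetime import date

# Invented, simplified stand-in for rows a VLE plug-in might read from the
# VLE's own tables: one row per assignment with its grade and feedback.
@dataclass
class FeedbackItem:
    course: str
    assignment: str
    submitted: date
    grade: str
    feedback: str

items = [
    FeedbackItem("LAW101", "Essay 1", date(2015, 3, 2), "62",
                 "Good structure; engage more with the case law."),
    FeedbackItem("LAW102", "Moot report", date(2015, 1, 20), "55",
                 "Argument needs a sharper focus."),
]

# A chronological, programme-wide view is the core of this kind of hub.
for item in sorted(items, key=lambda i: i.submitted):
    print(f"{item.submitted:%d %b %Y}  {item.course}  {item.assignment}: "
          f"grade {item.grade} – {item.feedback}")
```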

Getting detailed feedback and grading data out of other systems could be a challenge, however, and students who live with more than one VLE for one reason or another would almost certainly struggle to get a unified view. Most learners’ feedback ‘story’ would also be unlikely to be portable after graduation.

An example of a feedback hub of this type is the MyFeedback Moodle plug-in currently under development at UCL. This plug-in builds on earlier work at the UCL Institute of Education that provided a simple chronological list for teachers, by making a view available to students as well and integrating it with basic learning analytics indicators.

Bedford College has the Grade Tracker plug-in for Moodle, which has extensive support for tracking and even projecting grades, and is designed to work with the qualification structures of the main FE awarding bodies.

The University of York is working with Blackboard to develop something similar to MyFeedback for the Blackboard VLE. Because the work is just starting and the development approach is agile, York is focusing on providing just a chronological list of assignments and feedback, adding other features when there is demand for them.

Tutoring/Assignment management system

These systems offer a feedback hub as well as other features designed to give an integrated view of assessment schedules, grades, tutor meetings and some learning analytics data visualisations. They are stand alone systems, and the ones available now are designed to either live on campus, or be hosted by a dedicated hosting company.

The advantage of systems of this sort is that they can focus exclusively on a well-established feedback process: the personal tutor meeting. The system can give a busy academic all the progress indicators as well as private and shared notes and feedback, and it can do the same for the student preparing for the meeting.

The challenge for a tool of this sort is system integration: a way needs to be found to get assignment and grade data out of VLEs, Student Record Systems and assignment services. This can be difficult.

An example of a well-established tutoring/assignment management system is coTutor from the University of Loughborough, which is now available as a hosted application from the Jisc cloud service. Another example is the e-Assignments system from the University of Southampton.

Assignment service

These systems live on the internet, and are designed to integrate with other institutional systems to some extent. The advantages of such services are the same as those of software-as-a-service in general: economies of scale can make them full featured for a relatively low price, upkeep is left to dedicated teams, and improvements are continuous. For feedback, that particularly means that access to the hub can be taken to whichever web platform or mobile app learners and teachers prefer, either via widgets or other front-end integrations. For the learner, it also means that feedback can be gathered from several places and stay accessible after graduation.

Though the economies of scale help, integration with other systems, particularly at the back end, can remain challenging for these services. Also, although the widgets can help blend an assignment service into a VLE, it will still be another separate tool that users have to learn, and one that may not be able to mix in different progression data quite as seamlessly as a VLE plug-in.

An example of this approach is PebblePad’s forthcoming Flourish product, which presents a student’s learning journey as a continuous stream, much like the activity stream in social media. Entries in that stream include assignments with their mark and feedback, and each such entry can support interaction with the tutor or marker. A version of each student’s stream can be made visible to the tutor.

Another possible example is to build a new app on top of Box, an enterprise content management service that resembles Dropbox in some ways but allows institutional control. Box has embeddable viewers for common document formats, and those viewers support annotation. This suggests that it should be possible to construct an assignment service that can be embedded in any relevant web application or in dedicated mobile apps. Some school-oriented Box-based services already come close to that idea, and the Canvas VLE already has Box built in and uses it for assignment annotations.

Turnitin could also be called an assignment service, of course, but one that is more oriented towards plagiarism detection than the provision of holistic feedback hubs at the moment. This could change, however.

Next steps

These types of feedback hub give an idea about how assignments and feedback data can be aggregated and exposed to users. The next step will be to delve deeper into how these systems relate to existing and desired feedback practice, and what gaps there are between what learners and teachers want, and what the systems can provide. That, in turn, will inform what courses of action Jisc could take in the feedback hub area. Stay tuned.

 

Many thanks to all my feedback hub respondents.

Views are my own.

European EMA workshop – programme details

The programme for the workshop with European partners is now published and there are still some free places available.

EUNIS E-Learning Task Force Workshop

Abertay University, Dundee, Scotland, 9th June 2015, Events Area, Kydd Building

Electronic Management of Assessment and Assessment Analytics

The workshop will explore how technology can be used to enhance assessment and feedback activities and streamline associated administration and how we might make use of assessment data to improve learning. As a participant you will have the opportunity to:

  • Find out about new developments across Europe
  • Share practice and ideas
  • Hear about new developments from relevant system suppliers
  • Take away tools and techniques that can be used in your own context
  • Contribute to some ongoing European projects

The event is free and places are limited: to reserve your place fill in the online booking form as soon as possible.

Outline Programme

09.30 – Arrival and registration

10.00 – Welcome and introductions

10.05 – EMA: a lifecycle approach (Gill Ferrell, Jisc UK)

10.25 – Digital Assessment: a view from northern Europe (Freddy Barstad, UNINETT eCampus, Norway)

10.45 – Assessing online learning: the TALOE project (Tona Radobolja, University of Zagreb, Croatia), including a demonstration of the free tool.

11.30 – Hot topics: system suppliers present their views on important trends. 5 x 7 minute presentations, then delegates select which suppliers they want to join for 2 x 20 minute discussion sessions.

12.45 – Lunch & opportunity to visit supplier exhibition stands

13.30 – Assessment for learning (Lisa Gray, Jisc UK)

13.50 – Transforming assessment and feedback (Rola Ajjawi, University of Dundee)

14.10 – Student engagement in assessment practice (Nora Mogey, University of Edinburgh)

14.30 – Discussion groups

15.00 – Assessment Analytics (Adam Cooper, LACE project). Overview of the project; practicalities and ethics of assessment analytics; discussion of where data analysis can play a role in the assessment lifecycle.

16.00 – Close of workshop

16.05-17.00 – ELTF members showcase/meeting (all participants welcome to stay)


Solution development is under way: can you help?

First, a potted history for those new to this blog: Jisc has been working with the community to identify the sector’s key challenges around the electronic management of assessment and to develop solution ideas to address them (see the earlier posts on this blog).

As a result of your participation and feedback 3 projects will be getting under way after the Easter break. All projects are due to complete by September 2015. Some of you have already expressed initial interest in being part of these projects. Now that we have the work more clearly scoped we would be grateful if you would use the expression of interest form linked below to tell us exactly how you would like to be involved. Some aspects of the work will require relatively small working groups but we will endeavour to maximise participation in review and feedback activities. Most of the activity will take place online but there will be a few face-to-face activities.

Expression of interest form: https://www.surveymonkey.com/s/95VGTHD

Project 1: EMA Requirements Map

Contact person: Gill Ferrell gill@aspire-edu.org

The aim of this project is to review the workflows associated with the assessment and feedback lifecycle (particularly those around marking and feedback) and produce a set of visualisations that describe the main academic practices and the key variables that influence decisions. By reducing the ‘noise’ around differences and focusing on what is pedagogically significant the project hopes to provide institutions with a means to review and improve processes and help system suppliers better support common UK practices. It is hoped that institutions that have already undertaken process review in this space may find opportunities to replace frustrating ‘workarounds’ with better solutions.

The project will:

  • Identify, validate, specify and present (at a national level) a high level set of assessment and feedback workflows that represent the diversity of practices across the sector.
  • Develop a more detailed process layer under each high level workflow and validate these with a range of sector representatives.
  • Prototype a range of ways of viewing these processes, validate with users and develop and test an interactive, online tool to present these workflows and enable engagement with them.
  • Map individual systems to workflows, to identify which aspects they support, and any gaps.
  • Identify any anomalies to the identified workflows and explore how to deal with these in terms of the scope of this work.

 

Project 2: Feedback Hub feasibility study

Contact person: Wilbert Kraan w.g.kraan@ovod.net

The aim of this project is to deliver a study exploring the potential development of a Jisc-funded tool that would deliver an aggregated view of feedback and marks, with both tutor and student views. By examining some of the pedagogic, technical and process factors involved in implementing a feedback hub, it will inform the business case, identify the options for taking this work forward, with the cost/benefits discussed for each, and recommend the way forward that would offer most value to the UK HE and FE sectors.

One of the widely supported solution ideas was the development of a system-independent service/application/plug-in for aggregating, organising and presenting feedback (and marks). A second requirement was that it should also provide opportunities for interaction about feedback and marks between staff and students. Before we can take the development of this tool forward, further analysis is needed into the business case to ensure that whatever Jisc does adds most value to as many of our customers as possible.

The study will explore:

  • The challenges, opportunities, benefits and risks of Jisc assuming a role in feedback hub provision.
  • User stories around feedback and marks aggregation.
  • A review of existing systems and business models.
  • Ethical/policy implications.
  • Recommendations on where Jisc can add most value in this space.

 

Project 3: EMA Toolkit

Contact person: Gill Ferrell gill@aspire-edu.org

The aim of this project is to deliver an interactive online toolkit, based around the assessment and feedback lifecycle, that will provide examples of effective practice at each stage of the lifecycle. The toolkit will be written in an action-oriented way, to enable response and action by the institutions involved and will include resources such as: tools; case studies; shorter vignettes of good practice; policies and processes; information on technologies and integrations.

The project will:

  • Engage a working group of representatives from the sector to ensure that the design, themes and examples meet sector requirements throughout the development of the toolkit.
  • Identify resources and seek community involvement to fill gaps.
  • Explore the requirements for the toolkit from a technical perspective and work with specialists to create the final output.
  • Commission and support some projects to pilot the resources and revisit the toolkit in the light of the lessons learned from the pilots.

 

Other work

Another of the solution areas explored was improving the reliability of assignment submissions. At present Jisc will only be taking forward the policy, procedure and guidance aspects of the proposal and this will be included in the EMA toolkit. Contact Gill Ferrell if you wish to contribute ideas or resources.

The final solution area was EMA systems integration. At present Jisc is looking for examples of good and sharable practice that can be integrated into the EMA toolkit. Contact Wilbert Kraan if you wish to contribute ideas or resources.

Electronic Management of Assessment and Assessment Analytics Workshop

EUNIS E-Learning Task Force Workshop

University of Abertay, Dundee, Scotland, 9th June 2015, 10.00-16.00 

The workshop will explore how technology can be used to enhance assessment and feedback activities and streamline associated administration and how we might make use of assessment data to improve learning.

Across Europe universities are struggling with similar issues of bringing assessment and feedback practice up to date and meeting student and staff expectations:

UK (Jisc 2014): ‘We’re becoming increasingly used to dealing with the routine business of our daily lives online – from paying bills to buying groceries – so we might imagine that the days of students trudging to campus to hand in assignments, or trying to decipher a tutor’s scrawled comments, are long gone. It seems similarly anachronistic that tutors carry around back-breaking piles of essays to mark by hand, which will then need to be manually entered into IT systems and go through multiple rounds of scrutiny to weed out input errors.’

Norway (National Task Force 2014): ‘The digital “natives” that attend higher education institutions in Norway today study and learn with their laptops/mobiles but when they turn up for their long-awaited examination, they are, in their words, brought back to the stone age, and required to reproduce their knowledge with pen and paper. Academics eagerly wait for the students’ papers to arrive in snail mail and spend their time interpreting the illegible handwriting while administrators spend their days counting, copying, double-checking, packing, sending examination papers from student to examiner.’

We will look at some national initiatives exploring the workflows associated with assessment from a range of perspectives including pedagogy, business processes, supporting systems, legal and organisational issues. We will hear from projects offering practical solutions to common problems. There will be an opportunity to talk about current developments with system suppliers. We will investigate how the use of data from assessment processes in various forms of ‘assessment analytics’ might better support learning and teaching practice.

We will be hearing from:

  • Jisc Electronic Management of Assessment (EMA) project, UK
  • National project on Digital Assessment, Norway
  • TALOE project pedagogic toolkit (Europe-wide, represented by project partner University of Zagreb)
  • LACE project: Learning Analytics Community Exchange (Europe-wide)

The day will be highly interactive and will offer something of interest whether your role is academic, administrative or related to supporting IT systems. As a participant you will have the opportunity to:

  • Find out about new developments across Europe
  • Share practice and ideas
  • Discuss new developments with relevant system suppliers
  • Take away tools and techniques that can be used in your own context
  • Contribute to some ongoing European projects

Places are limited: to reserve your place fill in the online booking form as soon as possible.

This event is delivered in partnership with:


 

EMA at Queen’s University Belfast

Background and context

Queen’s University Belfast (QUB) is a broad-based research-intensive institution with 20 Schools, 11 Institutes, 2 University Colleges and 8 Directorates. The student body is primarily full-time undergraduates from Northern Ireland. This case study looks at work undertaken in the period 2011 to 2014 as part of the Jisc assessment and feedback programme, particularly the e-Assessment and Feedback for Effective Course Transformation (e-AFFECT) project.

The key drivers for the e-AFFECT project were:

  • The wish to build upon existing good practice, developed with the support of the Higher Education Academy Enhancement Academy, to enhance the student and staff experience of assessment and feedback.
  • The need to develop an effective institution-wide framework for the management of strategic change.
  • A desire to address a lack of consistency in assessment and feedback practice across the University, as evidenced in external (NSS) and internal student surveys.
  • The intention to extend the use of technology already supported by the University to support assessment and feedback.
  • The aim of supporting student attainment and retention in the University.

The project worked across a number of different Schools and began with a ‘baseline’ study of practice in each. Examination of these baselines revealed huge variation in the timing of assessment and feedback and the ways in which feedback was provided on coursework and exams in different Schools. The approach taken was one of Appreciative Inquiry, with a strictly non-judgemental review of current practice and a collaborative approach to identifying good practice to build on going forward.

EMA implementation

The appropriate use of technology played an important part in this work and QUB ensured that pedagogy was always driving technology adoption rather than the other way round. The project set out to identify effective and efficient practices in assessment and feedback for learning across the institution, with a particular emphasis on the role of technology in enhancing these, and to build capacity in the use of assessment and feedback technologies.

It was considered important to ensure that the institution was making best use of the technologies it already had at its disposal before seeking to invest in, and support, new technologies. The technologies available at the beginning of the project were:

  • Queen’s Online VLE (SharePoint) used for – e-submission/marking/uploading feedback, discussion forums and wikis.
  • Questionmark Perception (QMP v5.7).
  • Personal Response Systems (TurningPoint).
  • Turnitin UK used for – originality checking; during the life of the project the license was extended to include PeerMark for peer review and GradeMark.
  • MS Office used for – marking and feedback.
  • WordPress used for – student blogs.

As a result of the enhancements to practice, QUB now also uses a range of additional tools that are not supported by its Information Services department:

  • WebPA
  • VoiceThread
  • Jing
  • Audacity
  • PeerWise

QUB has undertaken interventions in many parts of the assessment and feedback lifecycle. In the section on ‘specifying’ we look at the way in which an agreed set of educational principles has influenced assessment and feedback across the institution. In the other sections we look at how particular Schools have focused on particular elements of the lifecycle. In all, 14 programme teams took part in the project, directly involving 255 academic staff, 19 administrators and almost 4,500 students.

  1. Specifying

Earlier work had already examined principles for good assessment and feedback practice – it was felt that, of the principles most commonly espoused in current literature, it would be best to focus on no more than about seven that QUB considered to be the most important. A conceptual model for the use of these principles was developed with the underlying rationale that all assessment and feedback activities should encourage positive motivational beliefs and self-esteem. This model formed the basis of designing project interventions to enhance practice and also now underpins assessment design at programme level.

QUB principles

To facilitate and engender dialogue with and by programme teams around the educational principles, eight cards were developed, each setting out the headline principle, the narrative behind it, suggested ways of accomplishing the principle and different technologies that might be used. The principles cards are available for others to download and use, along with a set of Technology cards. The technology cards are themed by functionality and each card provides information on:

  1. The type of technology
  2. Technology requirements – e.g. license, permissions, download
  3. Benefits to students and staff in using the technology
  4. Tips for using the technology (where these have been gleaned)
  5. Implementation considerations
  6. Key features set out for easy comparison
  7. Accessibility considerations

There is also an Action Plan template for programme teams, designed to capture where, when and how an activity would take place in the programme. It also captures whether training for staff and/or students is required and any potential barriers to the completion of the proposed action.

  2. Setting:

As well as the implications for the broad area of curriculum design, application of the principles has important implications for the setting of individual assignments for a specific instance of delivery.

Another widespread activity across QUB was the use of the assessment timelines tool from the University of Hertfordshire’s Jisc-funded ESCAPE project to map the assessment and feedback landscape in order to help with the process of setting assignments (see our case study on the University of Hertfordshire). The timelines were particularly useful in facilitating discussions and action planning. These brought together information from module descriptions and other School data and presented a clear visual summary of the schedule and type of assessments (formative or summative; high stakes or medium stakes) facing students throughout the academic year. This approach has now been built into a Continuing Professional Development event for programme/course teams reviewing or preparing new degree programmes.
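As a rough illustration of the timeline idea (not the ESCAPE tool itself), the sketch below lays out some invented module assessments week by week and flags weeks where more than one summative deadline falls; module codes, weeks and weightings are illustrative only.

```python
# Invented assessment data standing in for information drawn from module
# descriptions; module codes, weeks and weightings are illustrative only.
assessments = [
    {"module": "ENG1001", "week": 4,  "type": "formative", "task": "Draft essay plan"},
    {"module": "ENG1001", "week": 9,  "type": "summative", "task": "Essay (40%)"},
    {"module": "ENG1002", "week": 9,  "type": "summative", "task": "Presentation (30%)"},
    {"module": "ENG1002", "week": 12, "type": "summative", "task": "Exam (70%)"},
]

# Week-by-week view: a '!' marks weeks with more than one summative deadline,
# which is exactly the bunching a timeline discussion is meant to surface.
for week in range(1, 13):
    due = [a for a in assessments if a["week"] == week]
    flag = "!" if sum(a["type"] == "summative" for a in due) > 1 else " "
    tasks = "; ".join(f"{a['module']} {a['task']} [{a['type']}]" for a in due)
    print(f"Week {week:2}{flag} {tasks}")
```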

A number of different Schools have come up with ways of helping students understand the process of making evaluative judgements on an assignment by engaging them with assessment criteria and standards:

  • staff from the Centre for Biomedical Sciences Education produced a matrix of assessment, content and feedback opportunities across the programmes to identify patterns and demonstrate to students how the programme of work fits across the three years;
  • staff in Midwifery significantly developed the use of assessment criteria as a means of enabling their students to understand what was required and as a basis for the provision of feedback: assessment criteria (instead of guidelines) are provided for essays to make assessment more transparent to students and marking easier for staff; assessment criteria are used in feedback; an assessment rubric has been created using level descriptors; referencing is standardised and penalties defined (a guide for students has been developed by students); a review of the timing of feedback has led to the publication of dates for students and externals; generic feedback on common mistakes has been compiled into a bank and students are able to post questions on a discussion forum;
  • staff in Civil Engineering developed workshop materials for students on the criteria and standards for reports;
  • staff in Social Work carried out a review of module content and assessment, mapping the content, skills and assessment of the programme for staff and students;
  • staff in Law circulated exemplars of past work in an effort to engage students with standards and assessment criteria.
  3. Supporting:

The increased clarity about timescales, criteria and standards has benefited students in many Schools across the institution. As well as developing the workshop materials for students on the criteria and standards for reports described above, staff in Civil Engineering also supported students via:

  • on-screen provision of feedback to students on a draft graph for coursework
  • on-screen provision of feedback to students on a draft flownet for coursework

Following the interventions there was a significant difference between the mean module marks for 2012-13 and 2011-12, and a shift in the mark distribution, with proportionately fewer fails and third class marks and more first class marks. Detailed analysis of student marks over two years also confirmed that students who participated in the support activities were less likely to make errors in their final submission.

The programme team for Environmental Planning was particularly interested in developing their students’ feedback literacy and agreed that there should be workshops for students at all levels using exemplars and marking exercises as well as the use of VoiceThread to create tutorial and support materials. Interventions included:

  • Facilitated workshops on assessment and feedback: as a part of the assessment for the module students were required to indicate how they had used the feedback from the first assignment in the next.
  • Jing was used to provide screencasts to support subject specific skills development and to provide formative feedback on students’ design plans.
  • Four VoiceThread tutorial resources were developed, based around four themes, with questions for the students to answer. The aim was to encourage year 1 students to express an opinion on the question posed and then to discuss this effectively with their peers. Tutors provided feedback in VoiceThread on the students’ responses.

In the School of Law the action plan included student-led sessions on feedback and time management. Skills are now mapped throughout the degree programme in an effort to highlight where students have opportunities to be taught, to practise and to be assessed in the identified skills. In an effort to engage students with course material throughout the year, ten online ‘take home’ class tests will be developed using QuestionMark Perception: students must take and pass seven.

  4. Submitting:

QUB has an in-house EMA system used for e-submission: Queen’s Online assignment tool. Whilst policy is made locally, an increasing number of Schools are beginning to mandate e-submission following successful pilots and positive reports from other parts of the University.

Business Management decided to proceed with a trial of the Queen’s Online assignment tool as part of the e-AFFECT project. The trial covered 298 students on two campuses and identified the following advantages:

  • the same submission procedures could be followed by students at both campuses;
  • it was easy to upload and deliver feedback for students – the need for multiple individual emails was eliminated;
  • it was much easier to monitor submission times with e-submission and it was possible to monitor when students viewed their feedback;
  • part-time students did not need to take time off work to submit assignments.

The School of English took the use of the EMA tool even further and established e-submission, e-marking and e-feedback for all coursework. An unexpected outcome has been the realisation of how powerful the experience of one School can be in influencing others. Two further Schools, Creative Arts and Education, have now adopted e-submission, e-marking and e-feedback as a result of the positive experiences in the School of English.

  5. Marking and production of feedback:

The School of Psychology delivered all of the following elements of its action plan: new guidelines for feedback on dissertations, an inventory of writing skills, new feedback sheets incorporating the University descriptors, and an acceptance that staff should be exposed to each other’s feedback. Following a Review of Feedback workshop, other initiatives include attempts to standardise feedback across markers; sharing of good practice; tutorial exercises designed to help students interpret feedback; use of the comments function on documents rather than track changes; a new moderation policy that includes a view of the feedback provided to students; and a change to feedback sheets so that staff highlight the single most important aspect to consider for the next assignment.

The QUB Feedback review template is available to download and can be used to initiate discussion among staff around the consistency and quantity of assessment.

Biomedical Sciences is taking steps to ensure feedback is readily comprehensible to learners by using summer studentships to enable students to collaborate with staff in the creation of feedback comment banks to be used with GradeMark and PeerMark.

  6. Recording grades:

Although related to storage of assignment data rather than the actual recording of grades, an initiative worth mentioning here is the creation of an online repository (Vimeo Business) for student films created for work in Film Studies. The online repository overcame the problem of file size limits in the University’s VLE and resolved issues of submission, archiving and access for External Examiners.

  7. Returning marks and feedback:

Audio feedback was trialled in Film Studies and received a very positive response from students who requested further use of this approach. Biomedical Sciences is planning to deliver audio feedback using Jing.

Environmental Planning used Acrobat Pro to provide feedback annotation on assignments in a second year design module. Students could access this feedback on their computers, smartphones or tablets and analysis of the module marks in 2012-13 and 2013-14 demonstrated an upward shift in the profile of marks.

Biomedical Sciences has introduced a ‘marks breakdown’ on exams for each student with ranking, some statistics and a paragraph from the Module Convenor. In future, similar information on coursework performance will be added.

  8. Reflecting

QUB has produced some detailed staff and student questionnaires on assessment and feedback. These were originally developed to gain insight into experiences and perceptions of assessment and feedback and the technologies used in order to provide the baseline for enhancement. They are however equally useful as reflective tools to stimulate thinking about individual practice and approaches.

Students’ capacity to reflect and make evaluative judgements has been supported at QUB by the use of peer-review techniques in a number of subject areas. The School of Computer Science used PeerMark (part of the Turnitin suite) to enable students to peer and self-review final project submissions in order to develop their skills in critically evaluating their own work and the work of others.

Benefits

QUB has seen many benefits from the use of EMA across the assessment and feedback lifecycle.

  • The overall approach of using Appreciative Inquiry and supporting teaching staff to plan and implement specific interventions has proven to be an effective means of introducing positive change and is an approach that the University intends to apply to other initiatives.
  • There have been savings in staff time as a result of introducing e-submission, e-marking and e-feedback e.g. the School of English calculated it had saved 20 days of administrative staff time.
  • Feedback is being delivered in time to have a positive impact on learning habits. This is particularly noticeable in subjects such as phonetics.
  • Student attainment has improved in many of the areas where interventions have been undertaken e.g. one Linguistics module has seen a 4% increase in the mean student mark since the introduction of online formative feedback opportunities; Civil Engineering has seen an increase in the mean mark and proportionately fewer fails and third class marks since introducing a range of formative support activities.
  • External Examiners have responded positively to the convenience of being able to access material well in advance of exam boards.
  • Student satisfaction with assessment and feedback has increased year on year since 2012 in both internal and external surveys.

Find out more:

  • The e-AFFECT project has produced a wide range of resources and reports available from the Jisc Design Studio.

Have your say!

At our workshop in December some of you worked with us to develop a range of solution ideas to tackle the prioritised challenges, which were refined down into five concept areas. At our second workshop in January more of you collaborated with suppliers to work up each of these further. We would now like to share these ideas with you here so that you can vote and comment on which you feel would have most impact for you, and the sector, if delivered. Your comments will feed into the process of deciding which ideas will be taken forward.

Please add your comments to this post, along with the number of the idea that you feel would have most impact if taken forward!

1. EMA Requirements Map
To address the challenge that systems don’t always fully support the variety of marking and moderation workflows in place: this is a project to identify, validate and specify the sector’s key EMA requirements and workflows. The aim here is to provide clarity and transparency around assessment and feedback workflows (looking at the whole assessment and feedback lifecycle, but particularly around the period from submission to return of grades). This will help assessment systems suppliers better design systems that support good pedagogic practice as well as helping institutions review their own practice. The project would seek to identify common workflows and significant variables in collaboration with universities; consolidate and further analyse the workflows; develop a visual way of presenting workflows; map current systems to these workflows; and engage with suppliers and developers to fill the gaps.

2. Feedback hub
The development of a system-independent virtual tool/plug-in for aggregating, organising and presenting feedback (and marks) at a programme level, wherever they may sit, for both staff and students. The tool should also enable interaction around the feedback between staff and students.

Students would benefit from an aggregated view of their feedback to support self-reflection on progress; lecturers would see a more holistic view of students’ progress and be able to better understand an individual’s progression and better identify where intervention or support was needed. This holistic view could also enable more effective and efficient tutorial and supervisory processes.

3. Reliable submissions
To tackle the documented problems associated with system failures at critical submission points it was suggested there is a need to decouple the physical act of submission from the workflows within other EMA systems. This solution idea proposes the development of a submission tool (customisable by institutions) which includes a front-end asynchronous submission and receipting service, with back-end post-submission processing, so that submissions can be acknowledged and held until other functions are in a position to proceed. Policies, procedures, guidance and examples need to encompass the workarounds to deal with points of failure.
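To make the decoupling idea concrete, here is a minimal sketch of how such a tool might be structured (an assumption for discussion, not a specification): the front end persists the work, issues a receipt immediately, and a background worker forwards held submissions to downstream marking workflows when they are able to proceed.

```python
import queue
import threading
import time
import uuid
from datetime import datetime, timezone

# Held submissions waiting for downstream EMA systems to become available.
submission_queue: "queue.Queue[dict]" = queue.Queue()

def accept_submission(student_id: str, filename: str, payload: bytes) -> dict:
    """Front-end step: store the work, issue a receipt, return immediately."""
    receipt = {
        "receipt_id": str(uuid.uuid4()),
        "student_id": student_id,
        "filename": filename,
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    submission_queue.put({"receipt": receipt, "payload": payload})
    return receipt  # the student has proof of submission even if marking systems are down

def post_submission_worker() -> None:
    """Back-end step: forward held submissions once other functions can proceed."""
    while True:
        item = submission_queue.get()
        # A real service would retry delivery to the VLE or marking tool here.
        print("Forwarding", item["receipt"]["receipt_id"], "to the marking workflow")
        submission_queue.task_done()

threading.Thread(target=post_submission_worker, daemon=True).start()
print(accept_submission("s001", "essay.docx", b"..."))
time.sleep(0.1)  # give the demo worker a moment to run
```

The point of the sketch is simply that the receipt does not depend on any downstream system being up, which is what the decoupling proposal is about.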

4. EMA systems integration web resource
To address the problems of the lack of interoperability between marking systems and student records systems, and the subsequent need for ‘workarounds’ by staff, a resource was proposed to help institutions find solutions to EMA systems integration issues. These could relate both to workflows (see ‘EMA Requirements Map’ above) and to actual existing integrations (use cases including advantages, limitations etc. and code). Where gaps are identified through exploring existing integrations, the resource would enable them to be surfaced and prioritised for potential development. A ‘community of practice’ would support the resource.

5. Assessment and feedback toolkit
A web-based ‘toolkit’ of searchable resources (case studies, stories, staff development resources, tools etc.) based around the assessment and feedback lifecycle in an interactive form. The resource would aim to provide examples of solutions to assessment and feedback problems, enabled by technology, based on pedagogy and underpinned by research, and be able to be re-purposed for local contexts. It would aim to address the question of ‘what does good assessment design look like?’ and to enhance the assessment literacies of staff and students.

From challenges to solutions


On 9th December we held a workshop involving many of the contributors to this blog, along with other staff from HEIs, to take a service design approach to addressing the EMA challenges you have identified. It was a lively and participative day that resulted in 30 solution ideas. The ideas were discussed and voted on by participants in order to identify those that had most potential to provide real benefit to a significant number of stakeholders. We will be holding a further workshop on 22nd January to further refine the solution ideas and come up with an action plan for taking them forward.

So far the ideas gaining the most support have grouped around five main themes. What is interesting is that between them they address all of the top 10 prioritised challenges, and also that there is clearly no one-size-fits-all approach to some of these problems – many of the 30 ideas represented different ways of tackling the same problem. Here is a summary of the most popular ideas to date:

Solution Group 1. Common Workflows

Challenge/s addressed: Ability to handle variety of typical UK marking and moderation workflows/ Ability to manage marks and feedback separately.

We had a range of ideas around identifying, validating, specifying and gaining consensus around a common set of marking and moderation workflows. There was considerable interest in research already done by the University of Manchester into the range of workflows that exist across different disciplines, and enthusiasm for the idea of validating these further across the sector. If we are able to narrow down the diversity of approaches into a set of common models, it could help to inform systems suppliers and influence how systems develop to support those workflows, as well as informing new systems development.

The ideas ranged from simply documenting these workflows in broad terms, through turning them into more detailed specifications, to actually building ‘plug and play’ modules. We were also reminded of an existing open source tool built by the University of Southampton that was designed to deal with many of these workflow scenarios.

Solution Group 2. Holistic Feedback Hub

Challenge/s addressed: Student engagement with feedback/ Ability to gain longitudinal overview of student achievement.

There was consensus around the need for a more programme level/holistic view of feedback, for both tutors and students, to enable a more longitudinal view of student development as well as potentially facilitating greater engagement with feedback. One proposed solution was to develop a ‘holistic feedback hub’, where students and staff can access a programme level view of student feedback (it was noted that the IoE has already developed a tool in Moodle to do this). Another idea was for students to be empowered and enabled to take more ownership of pulling together a programme level view of their feedback by gathering this in their personal spaces (such as an e-portfolio). Ongoing conversations around feedback can be captured in the e-portfolio as part of ongoing engagement with feedback. Students could be encouraged to share their views of feedback with personal tutors in preparation for discussions during their tutorials.

Solution Group 3. Reliable Submission

Challenge/s addressed: Reliability of submission systems.

The ideas in this space focused around making the technical process of submission as simple as possible and clarifying policies and procedures to avoid stress and confusion when things inevitably do go wrong. It was suggested there is a need to analyse all of the possible points of failure and decouple the physical act of submission from the workflows within other EMA systems so that submissions can be acknowledged and held until other functions are in a position to proceed. Policies, procedures, guidance and examples need to encompass the workarounds to deal with points of failure.

Solution Group 4. Interoperability

Challenge/s addressed: Lack of interoperability between marking systems and student records systems/ Ability of systems to support variety of grading schemes.

The ideas relating to this topic covered both data management and technical interoperability. It was suggested there was a need to identify the minimum data storage requirement for each type of system and to consider whether each institution is carrying out functions in the most appropriate system and storing the data in the most appropriate place. There is a need to exchange good practice and existing solutions for common integrations and it was suggested we could go so far as to build some integrations where there are gaps.
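As a purely hypothetical illustration of the data management point, the record below shows the sort of minimal information a marking tool might hand to a student record system; the field names are assumptions for discussion, not a proposed standard or an existing integration.

```python
import json

# Hypothetical minimal record for a confirmed mark passing from a marking
# tool to a student record system; field names are illustrative only.
confirmed_mark = {
    "student_id": "s001",
    "module_code": "ENG1001",
    "assessment_id": "essay-1",
    "mark": 62,
    "grade_scheme": "percentage",  # systems also need to cope with letter grades, pass/fail, etc.
    "moderated": True,
    "released_to_student": "2015-05-01T09:00:00Z",
}

print(json.dumps(confirmed_mark, indent=2))
```

Agreeing even this small a common record, and which system holds it authoritatively, is the kind of good practice the proposed resource would capture.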

Solution Group 5. Good Practice Toolkit

Challenge/s addressed: Need to develop more effective student assessment literacies/ Risk aversion/ Academic resistance to online marking/ Need for greater creativity.

A number of solution ideas relate to the development of guidance and examples to promote an ‘assessment for learning’ rather than ‘of learning’ approach. The suggestion is for some form of toolkit which should address the question ‘what does good assessment design look like?’ and enhance both staff and student assessment literacies. Suggestions for content relate to programme level assessment design, principle led approaches to assessment design, encouraging more dialogue on, and engagement with, feedback; mapping the programme to provide a holistic view; looking at how to quality assure feedback and visioning far-reaching assessment design possibilities.