The Electronic Management of Assessment (EMA) project is a Jisc project which is working with partners (the Heads of e-Learning Forum (HeLF) and the Universities and Colleges Information Systems Association (UCISA)) to support institutions with the electronic management of assessment. The term EMA is increasingly being used to describe the way that technology can be used to support the management of the assessment and feedback lifecycle, including the electronic submission of assignments, e-marking and e-feedback. This project seeks to maximise the benefits technology can offer.
Our latest blog post is a guest contribution by Stuart Allan who has just completed an MSc in Digital Education at the University of Edinburgh. Stuart has been undertaking research into online exams. This is a short reflection on his research and Stuart will be joining us for a webinar on the topic in September so watch this space for further details.
I’ve been interested in exams and how they might relate to learning ever since my undergraduate days, when my degree was decided by nine finals in the space of ten days. (I still have nightmares about them…)
So when I was thinking of a topic for my MSc in Digital Education dissertation research, I wondered how useful digital technologies might be in final, high-stakes exams. As I read more, I discovered that the published literature on the specifically educational (as opposed to administrative or technical) implications of online exams was actually very small. (Myyry and Joutsenvirta (2015) is an interesting place to start.)
I found that the use of online exams often follows one of two main approaches:
- migration (transposing traditional exams to digital environments in order to achieve organisational gains, e.g. improved efficiency), and
- transformation (using digital technologies as a catalyst to redefine summative assessment and align it with the perceived needs of contemporary students).
My main focus was on how the migration and transformation approaches translated into educational practice in particular contexts. I interviewed eight higher-education staff involved in designing, developing and delivering online exams across four countries. They talked at length about their experiences, beliefs, aspirations and frustrations.
Instead of finding one approach to be better than the other, I concluded that both the migration and transformation approaches had significant shortcomings. The migration view seems to assume that online exam environments are instruments that we can use to achieve pre-ordained aims (such as improved efficiency); however, in my interviews I found examples of technologies interacting with, and having significant implications for, educational practice. The sociomaterial perspective was very useful here (see Bayne 2015 and Hannon 2013).
I also found the transformation view to be problematic in its own ways. For instance I began to question the validity of claims that online exams are a logical response to society’s changing needs, and to suggest that a more detailed understanding of the ways in which online exams might be qualitatively different to traditional exams is required.
Moreover, I discovered a potentially hazardous assumption that traditional exams could be migrated online (or be ‘a little bit digitalised’, to borrow one interviewee’s expression) as a prelude to more ambitious and educationally motivated changes further down the line. This transition appears not to be as straightforward as some might believe, and the migration stage often requires practitioners to overcome challenges that are unexpectedly time-consuming and financially draining.
One of the things I found most interesting was the apparent strength of some university professionals’ conviction that online exams must comply with exactly the same conditions – in terms of invigilation, the types of questions asked and candidates’ access to course materials, notes etc – as traditional pen-and-paper tests. To a large extent these assumptions set the tone for how the participants in my research used online exams.
With this in mind, I produced a number of questions that practitioners working with online exams might wish to consider:
- In your institution, what motivations exist for pursuing online exams, understood particularly in terms of how educational goals are defined at institutional and programme-specific levels?
- What assumptions are being made about what is meant by an ‘online exam’ within your context, and what can be done to support a constructive dialogue around these?
- To what extent does the dialogue between educational practice and the material contexts of particular digital environments result in online exams that are qualitatively different from traditional tests? For example, do online exams actively support, alter or proscribe particular types of student responses?
- In what ways might online exams be used to support increased assessment authenticity, in terms of both the context and content of examination tasks?
Lastly, I’d argue that the term ‘online exam’ itself – and all the assumptions about technology, education and assessment that seem to underpin it – might constrain the potential for developing practice to an unacceptable degree (see Gillespie 2010). Do we need to invent a new term to describe the summative assessment activities of the future? If so, what might that term be?
Bayne S. (2015) ‘What’s the matter with “technology-enhanced learning”?’, Learning, Media and Technology, 40 (1), pp. 5–20.
Gillespie T. (2010) ‘The politics of “platforms”’, New Media and Society, 12 (3), pp. 347–364.
Hannon J. (2013) ‘Incommensurate practices: sociomaterial entanglements of learning technology implementation’, Journal of Computer Assisted Learning, 29, pp. 168–178.
Myyry L. and Joutsenvirta T. (2015) ‘Open-book, open-web online examinations: developing examination practices to support university students’ learning and self-efficacy’, Active Learning in Higher Education, 16 (2), pp. 119–132.
Link to dissertation abstract: https://stuartallanblog.wordpress.com/dissertation-abstract/
Find me on Twitter: https://twitter.com/GeeWhizzTime
We’re pleased to announce the launch of a new guide on technology-enhanced assessment and feedback which has a focus on FE and skills. This new guide complements our existing suite of resources on technology in assessment and feedback, providing an up-to-date snapshot of practice across this diverse sector. The guide contains a rich variety of case studies and includes a podcast by Jayne Holt, assistant principal of Walsall College, on integrating college systems to track learners’ progress.
The message from the guide is clear: organisations, teachers, trainers and learners can all benefit from a technology-enhanced approach to assessment and feedback. However, our e-assessment survey report (May 2016) reveals that some are yet to appreciate the full benefits. The overall picture, particularly for tracking systems and e-portfolios, is mixed, as is the ability of organisations to integrate their various technologies to maximise potential efficiency gains. For this reason, our latest guide to assessment and feedback focuses specifically on the needs of the FE and skills sector, with our assessment and feedback lifecycle model as its starting point.
You may also like to know that our report on the evolution of FELTAG (June 2016) is now available. This offers further examples of effective technology-enhanced practice from colleagues across the FE and skills sector.
We are delighted that 16 suppliers have so far responded to the Jisc and UCISA invitation to complete our EMA system requirements template.
The template was designed to clarify the core requirements of UK HE providers. The responses will help universities and colleges better understand the ways in which particular systems can support their needs and to see how particular combinations of systems might work together. The template is based on our assessment and feedback lifecycle and is intended to be used in conjunction with our guide to EMA processes and systems.
Download the attached Excel spreadsheet (see below) to view a summary table and all of the individual responses.
Please note the following caveats:
- The data is based on self-reporting by the suppliers. Neither Jisc nor UCISA has tested the products and inclusion in this listing should not be taken as an endorsement of particular products
- The individual responses are recorded under supplier name not product name. If you are having trouble finding a product, check you have the correct supplier. For example: Banner = Ellucian; Canvas = Instructure; Moodle Coursework = University of London; URKUND = Prioinfocenter
- The list of requirements relates solely to EMA and may not represent the full functionality of the systems included here, e.g. student record systems cover many functions other than assessment
- This listing includes products intended to cover most of the EMA lifecycle as well as some more niche products. It is intended as a means of identifying which combination of products could meet your needs. It is not a like-for-like comparison of similar systems.
We will continue to update this information from time to time to cover additional products or new releases of the featured products. Suppliers wishing to submit new or updated information should contact Lisa Gray email@example.com
You are welcome to use the comment facility on this blog to discuss and share information about the effective use of all of these products and particularly their interoperability to help others.
Not all universities or colleges are currently implementing EMA organisation-wide. Some of you have told us you are looking for guidance on how to get started with EMA on a smaller scale.
In this podcast Bryony Olney from the University of Sheffield talks about how she went about organising an EMA pilot in her department.
You can also download a transcript of the case study.
Jisc and UCISA have been working with universities to develop a statement of requirements to help suppliers understand the needs of UK higher education in relation to the electronic management of assessment (EMA).
We have created a template for suppliers to respond to those requirements to help universities better understand the ways in which particular systems can support their needs and to see how particular combinations of systems might work together. The template is based on our assessment and feedback lifecycle.
We are now issuing an open invitation for suppliers who support the electronic management of assessment to respond and complete this template, by Friday 5th February. The downloadable template is attached to this blog post.
The template is being shared with all of the major suppliers of student record systems and learning platforms in the UK as well as suppliers of the most commonly used assessment related products. We welcome participation by any other interested suppliers. Please email your responses back to firstname.lastname@example.org. We hope the template is self-explanatory but feel free to contact us with any queries.
Responses will be published on this blog to help universities making system decisions. The full responses will be available to download and we will also compile an overview table which will pull out all supplier responses to the ‘included’ column.
Suppliers will be encouraged to update their initial responses as new product versions are released.
Download the template here:
This is a draft version of content for a new Jisc guide on EMA processes and systems that will complement the existing guide. Here on the blog we have split the two to help readability. This post should be viewed in conjunction with that on processes. The drafts will be open for feedback to help us improve the guide until 5th Jan 2016 so please do share your thoughts using the ‘comment’ function below.
Because the assessment and feedback lifecycle covers so many different functions, most institutions need a range of systems to support all of their activities. The key areas covered by information systems are generally:
- course and module information including assessment details
- student records including marks, feedback and final grades
- submission of assignments
- marking and feedback
- academic integrity checking
- online testing and examinations
Ideally these systems should be able to exchange data readily so that institutions can mix and match technologies based on needs, preferences and making best use of the systems they already have. Currently, however, interoperability between systems remains a key problem area. The expectation is that modern IT systems should have good APIs (application programming interfaces), i.e. sets of routines, protocols and tools that describe each component of the system (data or function) and serve as building blocks for a plug-and-play architecture. In practice, though, the emphasis is still very much on creating interfaces to move data between systems on a point-to-point basis. This is complex to achieve and brings a maintenance overhead: whenever a particular system is changed, a series of interfaces must be rewritten to update the links to all of the other systems.
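A small sketch makes the scaling problem concrete. The system names and the interface counts below are purely illustrative, not a description of any real institution's estate: with N systems, bespoke point-to-point interfaces grow as N × (N − 1), whereas a shared API contract needs only one adapter per system.

```python
from itertools import permutations

# Hypothetical set of campus systems for illustration only
systems = ["VLE", "student_records", "assessment_service", "eportfolio"]

# Point-to-point: every system needs a bespoke interface to every other one,
# and changing one system means rewriting all of its links.
point_to_point = list(permutations(systems, 2))
print(len(point_to_point))  # 12 interfaces for just 4 systems (N * (N - 1))

# Shared API: each system implements one adapter against a common contract,
# so adding a fifth system means writing one adapter, not eight interfaces.
shared_api_adapters = len(systems)
print(shared_api_adapters)  # 4
```

This is why a plug-and-play architecture built on agreed APIs carries a far lower maintenance overhead than a web of pairwise links.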
The systems are not the only problem. System integration often throws up a host of issues around institutional business processes, workflows, data definitions and data quality. This is why we have tackled the two topics in tandem. You need to ensure your data and processes are not an obstacle to making best use of your existing systems or to effective implementation of new and better systems.
Through collaborating with a working group of c.30 universities and the membership of UCISA we have identified the core requirements that UK higher education institutions have for information systems to support assessment and feedback practice.
The requirements are presented in a downloadable format that maps to the assessment and feedback lifecycle and which has supporting user stories to illustrate why the functionality is necessary. They are also viewable as embedded pop-ups as part of our EMA process maps.
Because we have concentrated on what is fundamentally important to all HEIs, all of the requirements should be considered as being ‘Must have’ priority.
Download requirements list as an Excel template
See the requirements embedded in our process maps
The specification has been publicised via Jisc and UCISA channels and suppliers of products of relevance to the EMA lifecycle are invited to use our template to highlight which of the requirements are supported by their product. Supplier responses are published on our EMA blog and customers of those suppliers are invited to use the blog for comment and discussion. The idea is that by sharing knowledge about effective use of a particular product, or about integration between a particular set of products, we can help institutions to get the most out of their existing investments.
As a supplier we suggest you continue to:
- consider the specification when preparing your product roadmaps
- update your response as new versions of your product are launched
- respond to customer discussion on the blog so that the wider community can develop a better understanding of your product.
The requirements specification can be used as a basis for developing an ITT (invitation to tender) to select a new system for your institution. This will not only save you work; you can also have confidence that the major system suppliers will be familiar with the requirements expressed in this way, so you have a better chance of getting accurate and meaningful responses.
Using this list as a starting point you can select the parts that are relevant to your particular procurement exercise and add features that are desirable for your institution as well as further detail about your existing product set that will need to interoperate with the new system.
For more guidance on how to go about choosing new technologies to meet your needs see our guide to selecting technologies. This will take you through managing a selection project, defining your requirements and conducting supplier evaluation.
This is a draft version of content for a new Jisc guide on EMA processes and systems that will complement the existing guide. Here on the blog we have split the two to help readability. The drafts will be open for feedback to help us improve the guide until 5th Jan 2016, so please do share your thoughts using the ‘comment’ function below.
EMA in higher education: processes and systems
This guide forms part of our suite of Assessment resources. It is intended to help those of you in HE review and improve the business processes that support your assessment and feedback practice and make the right choices as regards supporting information systems. We have created this guide for two main reasons:
- Our research has shown that many institutions struggle to get the most out of their information systems due to the variability and complexity of their business processes
- Both suppliers and institutions tell us there is a need to see core UK-wide system requirements clearly expressed
This guide contains resources to help people who manage assessment processes from both the academic and the administrative point of view, people who manage the supporting EMA systems and suppliers of EMA systems (ie systems supporting online submission, marking and feedback of assignments). It includes:
- A simple overview of the submission, feedback and marking process concentrating on the core tasks that all institutions need to carry out
- A set of requirements that specify what EMA systems need to be able to do to meet these core UK requirements
- A set of prompts to help you compare your own processes to our model and analyse where improvements could be made
The guide should be used in conjunction with our description of the assessment and feedback lifecycle in the guide to Transforming assessment and feedback with technology.
The purpose of this guide is to help you implement processes that meet all necessary quality standards but which are no more complex than they need to be. Principles underlying the approach are:
- Ensuring tasks are carried out by the right people. We have identified tasks as being: a learner responsibility, an administrative matter or a task requiring academic expertise
- Automating any routine administrative tasks that can be automated. Digital information immediately opens up possibilities for streamlining the manual workload associated with dealing with vast quantities of paper
- Ensuring processes are undertaken for sound academic reasons. We hope you will use these resources to challenge complexity that exists for no better reason than “We’ve always done it that way”.
We can help you identify what a process needs to achieve, what skill set is needed and where information systems can help. What we can’t do is design standard workflows that will suit your institution. Individual workflows will depend on how your organisation is structured and what information systems it uses. You may well have different workflows for different assessment situations and this is fine so long as you are sure there is a valid reason why each variant exists.
Research for our 2014 EMA landscape review showed that responsibility for assessment and feedback policy and procedure is often devolved to local level within institutions. What this means in practice is that large institutions rarely carry out a particular function by means of a single, institution-wide, business process. Different faculties, schools, departments or even programmes, each have their own ways of doing things. This level of process variation is an inhibitor to achieving the efficiencies and benefits possible through the application of EMA technology because a series of time-consuming and cumbersome workarounds are likely to be needed to adapt the system to many different ways of carrying out the same activity.
Participants in our research frequently commented on the extent to which new technologies are ‘bolted on’ to old processes without people really taking the time to stand back and consider what the process is really intended to achieve.
In some cases poor process design is due to lack of time and appropriate skills. During the Jisc assessment and feedback programme, a concern was voiced that academic staff often find themselves on a ‘treadmill’ due to poorly designed processes. Their workload is such that they cannot pause to think about doing things differently. They also recognise that they do not have the skills to undertake process review and redesign without some more specialist support yet they know that they cannot improve their pedagogy without better designed processes.
In other cases a significant part of the problem is the persistence of ‘organisational myths’ surrounding policy and process. The tendency to do things the way they have always been done is perpetuated by a belief that this is somehow enshrined in local or institutional policy. When challenged on existing approaches, academics are often surprised to find that many characteristics of the process are matters of historic choice rather than regulatory issues and, indeed, often surprised at how few regulations there actually are or how easy it is to make changes to perceived blocks and barriers in the regulatory frameworks. Variation in the application of assessment policy across an institution is often down to such myths about what actually constitutes the policy in the first place.
Example: Impact on student experience
Different approaches to carrying out the same task impact not only staff workload but also the student experience. As an example, one institution that had the capacity to accept e-submission of all text-based assignments noted the following variations:
- One faculty accepted e-submission for postgraduates only but then printed out the assignments for marking
- Some course teams were happy to accept and mark submissions electronically but students were still required to submit a paper copy to meet the requirements of the coursework receipting system
- One department required students to submit a hard copy for marking and also an electronic copy to be submitted through the plagiarism detection system
Can this complexity really be reduced to 10 steps? That's the challenge we have set ourselves. We aim to show that there are 10 core tasks in the process and that, by making good use of EMA systems, you can cut out much of the manual intervention that consumes resource, introduces error and adds little value to the learning experience.
Shown below is our overview of the submission, marking and feedback process. Click on the image for an enlarged view.
This model covers all types of summative assessment where there is a mark given as well as feedback. It also covers iterative processes where students might undertake formative checking of their own work using text matching tools to review academic integrity, might undertake self or peer review and might be required to show evidence that they have engaged with their feedback before their mark is released.
Why are there 11 task boxes rather than 10? Call it cheating, but we decided that ‘Apply penalty or mitigation’ is a single task that might occur at different times depending, for example, on whether the penalty was for late submission or academic misconduct.
This is a high level overview. There is no right or wrong way to draw a process map – you need to choose the level that is right for you. For example you might choose to break down some of the sub processes further. Find out more in our guide on process mapping.
N.B. The guide should ultimately offer options to download as an interactive PDF and as a Visio file to edit
Below is an example of the submission phase broken down into more detail. This time we have identified the role of the EMA system to show which tasks can be fully automated. If you follow the link to the interactive version and hover over the question marks you will see hints and tips about managing the task.
N.B. The guide should ultimately offer options to download as an interactive PDF and as a Visio file to edit
N.B. Interactive view similar to the above to be added
N.B. The guide should ultimately offer options to download as an interactive PDF and as a Visio file to edit
Marking and feedback process
We suggest you use our 10 step model (and the detailed breakdowns) to compare against your own practice and ask yourself the following questions:
- Are you doing additional tasks – if so, why?
- Are the tasks being done by the right people eg do you have academic staff undertaking administrative duties that do not require academic judgement?
- Do you have systems that could carry out some of the tasks you are doing manually?
- Do you have multiple ways of performing the same task – if so, why?
You can find out more about ways to help improve your processes in our guide to process improvement.
Our guidance on process mapping includes some simple and user friendly techniques that can be used with all types of stakeholders to analyse issues with current approaches and suggest and evaluate ideas for change. In particular some of the techniques you might consider are: flow charts with swim lanes, rich pictures and RAEW (Responsibility, Authority, Expertise and Work) analysis and details on all of these can be found at the end of the process mapping guide.
You can also find out about how some universities have benefited from so-called Lean approaches at the end of our section ‘What is a process?’.
Here are some further examples of how different universities have approached the mapping of assessment and feedback processes to help them review practice:
N.B. Examples will be linked from the guide.
- The University of Oxford, department of continuing education, used a swim-lane approach, developed in Excel, with all actors identified. CASCADE workflows
- The University of Hull has created a more detailed version of our overview with its individual systems identified. Hull Ideal process
- The University of Sheffield, department of town and regional planning, used before and after process maps to review assignment submission. This is an example where the department identified a continued need to support marking on paper. Sheffield TRP Assignment Submission Process Sheffield TRP Assignment Submission Process – To Be
- University X has used our lifecycle to produce a roadmap for an ambitious development programme over a three year period.
Work is progressing well on a diagnostic tool that will help you assess the state of progress with electronic management of assessment in your area of work. At the end of the assessment you will be able to estimate your current state of maturity and develop an action plan.
The tool uses five stages of maturity:
- Researching: You are at an early stage of EMA. You do not seem to have a comprehensive view of organisational activity overall; policy, process and systems seem fragmented. Ensure you have senior management support to undertake further investigation. Start by defining the principles that underpin assessment and feedback in your organisation and find the areas of good practice you can build on.
- Exploring: You are probably aware of pockets of good practice but have not really begun to try to scale this up. You will need to be clear about expected benefits in order to effect the cultural change needed.
- Embedding: You are at a tipping point where fairly widespread experimentation is close to becoming mainstream practice. A key issue will be ensuring that business processes are sufficiently consistent to support a more holistic approach.
- Enhancing: You are probably already supporting the core of the assessment and feedback life cycle with technology and looking to fill gaps and find more elegant solutions to existing workarounds.
- Pioneering: You are looking to go beyond automation, standardisation and efficiency gains to ensuring that EMA has a truly transformative impact on learning and teaching in your organisation. Your organisation is probably a provider of many of the resources in our toolkit but we hope we can still provide some inspiration and support.
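A diagnostic of this kind typically maps a questionnaire score onto a stage. Purely as an illustration of the idea (the band boundaries and scoring here are invented and do not describe the actual tool), a minimal sketch might be:

```python
# Hypothetical scoring sketch: band boundaries are invented for illustration
# and do not reflect the real Jisc diagnostic tool's scoring.
STAGES = ["Researching", "Exploring", "Embedding", "Enhancing", "Pioneering"]

def maturity_stage(score, max_score=100):
    """Map a 0..max_score questionnaire total onto one of the five stages."""
    if not 0 <= score <= max_score:
        raise ValueError("score out of range")
    # Divide the score range into five equal bands; clamp a perfect score
    # into the top band rather than indexing past the end of the list.
    band = min(int(score / max_score * len(STAGES)), len(STAGES) - 1)
    return STAGES[band]

print(maturity_stage(15))   # Researching
print(maturity_stage(55))   # Embedding
print(maturity_stage(100))  # Pioneering
```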
The tool has been tested with a pilot group of institutions. As a result of their feedback we are making improvements so that the tool better meets the needs of those working at a more localised level than whole institutional change.
We hope to be launching the final version early in the New Year so keep an eye out for more updates.
The self-assessment tool is being developed using the Jisc co-design approach and we are particularly grateful for participation from the following institutions:
- Anglia Ruskin University
- Birmingham City University
- Manchester Metropolitan University
- University of Bradford
- University of Edinburgh
- University of Hull
- University of Nottingham
- University of Sheffield
- University of Southampton
- University of York
Moving the management of assessment online has opened up new possibilities and expectations around the role of assessment in learning. One of these is the use of holistic feedback; feedback that comes from across the years of a degree programme, and not just the modules a learner is enrolled in at the moment.
History: the solution workshops
You could call a system feature that offers such functionality a ‘feedback hub’, and when Jisc worked with the sector to identify unmet needs in the EMA area, it was clear that this was one of them. With a feedback hub, it becomes possible for students and their tutors and other educators to identify strengths and weaknesses in their learning and focus feedback on them over more than one assessment cycle.
The features of feedback hubs
In order to provide holistic feedback, feedback hubs ought to have a particular set of features. Over the course of two workshops, colleagues from a range of providers defined a candidate set. Following a further round of whittling and prioritising, the following ten features emerged, ranging from crucial to nice-to-have:
- aggregate feedback from all modules/years and systems
- aggregate grades as well as feedback
- provide different views for staff and students
- facilitate dialogue about the feedback
- facilitate dialogue about the feedback in stages (after the student has responded to questions, for example)
- provide different views structured by different programme wide learning outcomes
- add feedback manually at any time
- provide access to comprehensive feedback offline
- allow students to rate feedback
- integrate with learning analytics solutions
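The first two features above have a direct data-model implication: each piece of feedback must carry its source system, module and year so it can be aggregated across all of them. As a rough sketch only (the record type, field names and example systems here are hypothetical, not taken from any real feedback hub):

```python
from dataclasses import dataclass
from typing import Optional
from collections import defaultdict

@dataclass
class FeedbackRecord:
    """One piece of feedback, tagged with where and when it came from."""
    student_id: str
    module: str
    year: int
    source_system: str       # e.g. a VLE, Turnitin or a student record system
    grade: Optional[float]   # grades aggregated alongside feedback (feature 2)
    comment: str

def aggregate_by_student(records):
    """Group feedback from all modules, years and systems per student,
    ordered chronologically so strengths and weaknesses can be tracked."""
    view = defaultdict(list)
    for r in sorted(records, key=lambda r: (r.year, r.module)):
        view[r.student_id].append(r)
    return dict(view)

records = [
    FeedbackRecord("s1", "EDU101", 1, "VLE", 62.0,
                   "Clear argument; cite more widely."),
    FeedbackRecord("s1", "EDU205", 2, "Turnitin", 68.0,
                   "Referencing much improved."),
]
print(len(aggregate_by_student(records)["s1"]))  # 2
```

Separate staff and student views, dialogue and analytics integration would then be layers built on top of a record like this.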
Within the first defining feature, the need to aggregate feedback from across different systems is at least as important as the ability to aggregate over time, and probably more difficult. Some institutions manage to do all assessment in one system, but the majority use at least a VLE and an assessment service such as Turnitin, and usually a student record system as well. Some also have an assessment management system such as co-tutor in the mix.
Current state of development
Feedback hubs are a relatively new phenomenon, and, as explored in an earlier post, there’s quite a variety of systems that provide the functionality as a result. Progress has been remarkable in the course of a year, and users of systems such as the MyFeedback Moodle plugin, Pebblepad’s Flourish or the Brightspace VLE can start developing holistic feedback practice this academic year.
Others are not far behind, though much still depends on system integration. Fortunately, the IMS Assignment work that was also reported on earlier should help make it much easier to exchange assignments and information about assignments.
The potentially trickier question is how to develop practices in using feedback hubs as part of business-as-usual assessment processes. The issue is a classic Catch-22: vendors would like to know what users will really use, but users don't know what they need until they've tried it. Nonetheless, with the first institutions going live this year and next, we should be able to learn from practical experience soon.
We explored in detail the potential options for Jisc to add value in this space. These can be summarised as:
- Addressing the ease of access to feedback. The main technical barrier identified by vendors and institutions alike for the provision of holistic feedback was the difficulty of gathering assignments and feedback from the range of systems where feedback is typically held. To this end, one option would be to work with the recently set up IMS LTI Assignment Taskforce, which is working to develop a common specification that should ease the movement of assignments or information about assignments from one system to another.
- Partnering with existing vendors in the development of software, or exploring the development of new tools
- Supporting a forum where specialists from learning providers and vendors could come together to share early priorities and findings: to guide software development at first, and good practice later. Such a forum could facilitate the development of software and practice in tandem. It could break through the requirements catch-22 by speeding up the development of common holistic feedback practice, and also provide a means of testing software and software integrations in a variety of contexts
- Finally, ways of bootstrapping feedback hub functionality by organising and co-sponsoring crowd funded features were also explored.
After a full review of the options, it was agreed to focus on the first option: addressing the key issue of providing easier access to feedback from across systems. We will look at the models of marking that have already been developed, and will specify them at a detailed data model and protocol level. These will be available to anyone to take up and use, and will be shared with the IMS LTI Assignment Taskforce. That means that assessment workflows that are common in the UK will be used to inform the design of the protocol that systems such as VLEs and assessment services use to integrate with each other. At the same time, the vendors in the IMS group will get a detailed and relevant view of real-life use cases to guide the design of the new specification.
What this also means is that assessment processes can be examined at every level: from the high level of the assessment lifecycle, to the 10-step EMA process, to the particular workflows of the 5 models of marking, down to data model and protocol level choreographies of which bits of data get exchanged between systems and in what order. We'll have to see how many of the marking models can be covered in this way, because each of them does sum up quite a lot of variability. The aim of the detailed models will be to inform not just the IMS work, but also system developers as well as those who need to configure, integrate and maintain these systems in institutions.
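To make the idea of a protocol-level "choreography" concrete, here is an illustrative sketch. The parties and message names below are invented for the example, they are not taken from the IMS specification or any marking model; the sketch just shows how an agreed ordering of exchanges between systems can be written down and checked against an observed message log.

```python
# Hypothetical choreography: the agreed order of exchanges between a
# VLE, an assessment service and a student record system for one
# imagined marking workflow. All names here are illustrative only.
CHOREOGRAPHY = [
    ("vle", "assessment_service", "submit_assignment"),
    ("assessment_service", "vle", "acknowledge_receipt"),
    ("assessment_service", "vle", "return_marks_and_feedback"),
    ("vle", "student_record_system", "post_final_grade"),
]

def follows_choreography(observed, choreography=CHOREOGRAPHY):
    """True if the observed messages occur in the agreed order.

    Gaps are allowed (not every optional step need appear), but
    out-of-order messages fail the check.
    """
    remaining = iter(choreography)
    for message in observed:
        if not any(step == message for step in remaining):
            return False
    return True
```

A specification at this level of detail lets both sides of an integration test the same thing: the sending system can check it emits messages in the agreed order, and the receiving system can validate what it sees.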
In addition, we will take the feedback hub features identified above and bundle them into a general EMA system requirements template that vendors can use to describe the capabilities of their products (which is already in train as part of the wider EMA project). That way, we can surface how systems are currently meeting these demands for a holistic view of feedback in the context of wider EMA capabilities.
Due to the rapid development of feedback hub software, neither the development of new tools nor working in partnership with existing vendors on tool development was seen as a viable option. It was also felt that the development of a community of practice, although potentially useful, wasn't something that Jisc is currently best placed to take forward.
We are very much looking forward to seeing how the development and use of these tools in universities and colleges emerges and grows over the coming months and years. Please get in touch to share your stories.
List of hubs
As part of the feedback hub research, we’ve had a look at the following systems:
- MyFeedback Moodle plugin
- University of York BB plugin
- The Bedford College Moodle Grade Tracker
- Edinburgh College of Art assessment and feedback system
- University of Portsmouth Moodle – TII integration
- Anglia Ruskin University Sharepoint – TII integration
We’re particularly grateful to these system providers and the wider community for their contributions to this research.
As you all know, we’ve been working on the development of an online toolkit to bring together examples, guidance and resources to support universities and colleges with their use of technology for assessment and feedback. We’re pleased to say that the first element of that suite of resources is now available: the ‘Transforming assessment and feedback with technology’ online guide, with other elements due for launch in March 2016.
We’re keen to hear your thoughts on this resource, and have created a short Google form to capture them. So please do have a browse, and tell us what you think! This feedback will help to inform the final shape of the resource (the form will be open until the end of December).
The full suite of resources, when launched fully in March, will also include:
- an EMA self-assessment tool (a development requested by the working group) leading to resources within the guide (currently being tested);
- the EMA workflows / systems specification resources embedded within the guide (see our previous blog post on where we are up to with those).