An assessment passes through many hands and even more systems on its way from setting to the recording of the final mark. Partly because it is so central to teaching and learning, the process is unusually complex, involving many touch points in its electronic form. We’ve sequenced the whole trajectory of a typical assessment to see what system integration lessons can be learned.
A well-known rule of thumb in process or architecture modelling is to limit diagrams to seven blobs, plus or minus two. The Jisc Electronic Management of Assessment (EMA) ten step model sticks to that adage to great effect. The UML sequence of an assessment’s journey discussed here does not, not even when chopped into four phases.
The sequence diagrams are, therefore, meant more as a means of surfacing common bottlenecks and other data flow issues than as a means of elucidating the EMA landscape. Used that way, they work well, but some limitations need to be borne in mind.
One limitation is that the sequence modelled is fairly typical, but cannot be representative of all EMA processes, as there is too much variation in practice. For example, the modelled sequence does not include peer review, which would make the journey more complex still. It also doesn’t include anonymous marking, though that wouldn’t necessarily affect the flow much. The assessment is single part, and the assumption has been that people mark online. The sequence focusses on system interactions, so actions like logging in have not been included.
Crucially, the sequence assumes a fairly typical use of a combination of a VLE such as Moodle and an assessment service such as Turnitin, rather than the use of either one in isolation.
The resulting sequence has been broken into four stages:
- setting and editing the assessment
- doing the assignment iteratively, with feedback
- double marking
- marks and feedback release, exam board moderation
Looking across them all, a couple of issues become clear. One is to do with the synchronisation of information across systems, the other is rooted in the still imperfect combination of desktop and online systems.
The VLE as the centre of the teaching and learning universe
The assessment journey outlined here is based on the current practice of VLE and assessment service integration via VLE plugins. What’s noticeable about that practice is that all interactions start at the VLE, even if the main business happens at the assessment service. This is evident in many little two step hops:
In order to select an assignment, the user first needs to call up a page in the VLE about assessment ‘y’; the VLE then grabs the relevant list for ‘y’ from the assessment service, and that gets shown in the user’s browser. The reason why this hop is needed becomes clear at release points:
An edit to the assessment configuration such as changing the release date needs to be synchronised between the VLE and the assessment service. One way of doing that is to make sure any interaction is initiated from the VLE, which the assessment service always follows.
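A minimal Python sketch of this one-way arrangement might look as follows. All class and method names here are illustrative, not any real Moodle or Turnitin API: the point is simply that the service only ever follows the VLE, while reads from the browser are proxied through the VLE in the two-step hop described above.

```python
class AssessmentService:
    """Stand-in for an external assessment service such as Turnitin.
    It never initiates changes; it only follows what the VLE pushes."""

    def __init__(self):
        self.settings = {}      # assessment id -> settings pushed by the VLE
        self.submissions = {}   # assessment id -> list of submissions

    def get_submissions(self, assessment_id):
        return self.submissions.get(assessment_id, [])

    def update_settings(self, assessment_id, settings):
        self.settings[assessment_id] = settings


class VLE:
    """The VLE acts as master: every read and write starts here."""

    def __init__(self, service):
        self.service = service
        self.settings = {}

    def assessment_page(self, assessment_id):
        # The two-step hop: the browser asks the VLE, the VLE asks the service.
        return self.service.get_submissions(assessment_id)

    def change_release_date(self, assessment_id, new_date):
        # Edits are applied locally, then pushed, so both ends stay in step.
        cfg = self.settings.setdefault(assessment_id, {})
        cfg["release_date"] = new_date
        self.service.update_settings(assessment_id, dict(cfg))


vle = VLE(AssessmentService())
vle.change_release_date("y", "2024-05-01")
print(vle.service.settings["y"]["release_date"])  # prints "2024-05-01"
```

Because every change funnels through `change_release_date` on the VLE side, the two systems cannot disagree; that simplicity is exactly what the master-slave pattern buys.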
The consequence of such a master-slave relation is that it entrenches the status of the VLE as the centre of the teaching and learning universe, which may or may not be the aim of a particular organisation.
It is possible to change all that with a different protocol where changes can be initiated from either end, like so:
The disadvantage of that solution is that it makes considerably bigger demands on the integration part of both systems. Each has to be ready to receive a change from the other at all times, and able to act upon it. Messages can’t be dropped, and duplicated messages can be a problem too. In case of doubt, there has to be a way of determining which system is right. If the systems can’t do all of this, there is a risk of assessment details such as deadlines getting out of sync.
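Those demands can be made concrete with a small sketch of one side of such a two-way channel. This is a hypothetical protocol, not a real integration API: it deduplicates repeated messages by id, applies last-writer-wins on timestamps, and, in case of doubt, deems the VLE to be right.

```python
class SyncEndpoint:
    """One side of a hypothetical two-way sync channel."""

    def __init__(self, name):
        self.name = name
        self.seen = set()   # message ids already processed (deduplication)
        self.state = {}     # field -> the last accepted message

    def receive(self, msg):
        # Duplicated messages must be harmless: drop anything seen before.
        if msg["id"] in self.seen:
            return False
        self.seen.add(msg["id"])
        current = self.state.get(msg["field"])
        newer = current is None or msg["ts"] > current["ts"]
        # In case of doubt (equal timestamps), the VLE is deemed right here.
        tie = current is not None and msg["ts"] == current["ts"] and msg["origin"] == "vle"
        if newer or tie:
            self.state[msg["field"]] = msg
        return True


ep = SyncEndpoint("assessment-service")
ep.receive({"id": 1, "field": "deadline", "ts": 10, "origin": "vle", "value": "1 May"})
ep.receive({"id": 1, "field": "deadline", "ts": 10, "origin": "vle", "value": "1 May"})      # duplicate, dropped
ep.receive({"id": 2, "field": "deadline", "ts": 10, "origin": "service", "value": "2 May"})  # loses the tie
print(ep.state["deadline"]["value"])  # prints "1 May"
```

Even this toy version needs ids, timestamps, and an agreed tie-break rule on both ends, which illustrates why two-way protocols are so much more demanding than the master-slave hop.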
Still, a desire to re-align wider institutional learning environments may push us in the direction of system integration protocols that are more two-way.
Desktop and online integration and the submission bottleneck
Another aspect that the sequence diagrams make clear is the complexity introduced by moving data from the internet to the desktop and back again. Some of that is clear in the final ‘release feedback’ view, particularly where marks and assignments need to be shuffled to and from external examiners via email.
But the desktop obstacle is particularly noticeable in the ‘doing assessment’ view where a good deal of the interactions are all about navigating from the VLE to the assessment service interface, then picking the assignment, and uploading it, before getting a receipt. And that’s before considering the fact that assignment(x) relates to multiple separate files.
What’s worse, the assignment submission dance is very time critical because of assessment deadlines, which, worse still, all tend to fall at around the same time across the whole country. The resulting load on the assessment service can easily lead to significant performance issues at the worst possible time.
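One common way of softening such load spikes on the client side is retrying with exponential backoff and jitter, so that thousands of students hitting submit at the deadline don’t all retry in lockstep. A sketch, where `upload` stands for any hypothetical submission call that raises on failure:

```python
import random
import time


def submit_with_backoff(upload, attempts=5, base_delay=1.0):
    """Retry a submission when the service buckles under deadline-time load.

    `upload` is any callable that raises IOError on failure; the waits grow
    exponentially, with jitter so that clients don't all retry at once.
    """
    for attempt in range(attempts):
        try:
            return upload()
        except IOError:
            if attempt == attempts - 1:
                raise  # give up and surface the error to the student
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))


# A flaky upload that fails twice before the service recovers:
calls = {"n": 0}

def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("service overloaded")
    return "receipt-123"

print(submit_with_backoff(flaky_upload, base_delay=0.01))  # prints "receipt-123"
```

Backoff only smooths the spike, though; it does not remove the underlying bottleneck, which is the subject of the next point.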
This suggests that eliminating the desktop part of the sequence, and keeping the whole operation online, could streamline the process. Current VLE and assessment service integrations with online authoring environments such as MS Office 365 or Google Docs do not really achieve that, however. The reason is that they still treat integrations as a series of user driven events in which the desktop file system is simply substituted with an online one. No real use is made of the fact that online authoring environments are persistently available to the VLE and the assessment service.
If they did make use of that persistence, the VLE and the assessment service could access the assignment from initial conception to exam board moderation, and maybe even beyond, for any purpose at any time without relying on a user. Formative feedback could be given at any stage, and originality checked whenever that function is ready, thus avoiding the submission bottleneck. The persistence of online document storage could also mean that accessing feedback on assignments from different modules and different years (i.e. holistic feedback) becomes much easier. Because neither students nor teachers need to drive the assessment flow by stepping through the submission process repeatedly, the load on them is lightened, and the scope for error reduced.
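What that persistence-based architecture could look like can be sketched in a few lines. Nothing here corresponds to an existing product API; it is just the shape of the idea: the service reads the document’s revisions whenever it is ready, with no submission event and hence no deadline-day upload bottleneck.

```python
class OnlineDocument:
    """Stand-in for a persistently available online authoring document."""

    def __init__(self):
        self.revisions = []

    def edit(self, text):
        self.revisions.append(text)


class PersistentAssessmentService:
    """Reads the document whenever it is ready -- no user-driven upload."""

    def __init__(self, document):
        self.document = document
        self.checked = 0        # index of the last revision checked
        self.feedback = []

    def run_checks(self):
        # Formative feedback or originality checking on whatever new
        # revisions have accrued since the last run.
        new = self.document.revisions[self.checked:]
        self.feedback += [f"checked revision of {len(t)} chars" for t in new]
        self.checked = len(self.document.revisions)
        return len(new)


doc = OnlineDocument()
service = PersistentAssessmentService(doc)
doc.edit("First draft")
doc.edit("First draft, extended")
print(service.run_checks())  # prints 2: both drafts checked, unprompted
doc.edit("Final version")
print(service.run_checks())  # prints 1
```

The student simply keeps writing; the service decides when to look, which is the inversion of control that would dissolve the submission bottleneck.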
Systems aren’t quite ready for such an architecture, but it’s easy enough to set up such a process in your favourite online authoring environment, provided you’re willing to forgo the assessment service functionality.
Because the assessment process modelled here is generic, individual institutions might want to adapt the general flow to their own specific processes in order to spot the bottlenecks and unneeded complexity particular to them. For that reason, the UML sequence diagrams are made available in editable form. Fixes and refinements would be very welcome too!
From Jisc’s perspective, one interesting possibility is to look at the steps in the sequence to identify those interactions that we might want to capture in learning analytics data streams. Given how critical assessment is to student success, some of the EMA process steps ought to be valuable raw data for predicting the future performance of a student. An obvious example is how close to the deadline a student submits an assignment, but we could also look at evidence of a student’s interaction with feedback on assignments.
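The deadline-proximity example can be computed trivially once submission events are captured; the function below is a sketch of such a candidate feature, not part of any existing analytics pipeline.

```python
from datetime import datetime


def hours_before_deadline(submitted_at, deadline):
    """A simple candidate analytics feature: how close to the deadline a
    student submitted. Negative values would indicate a late submission."""
    return (deadline - submitted_at).total_seconds() / 3600


deadline = datetime(2024, 5, 1, 16, 0)
print(hours_before_deadline(datetime(2024, 4, 30, 16, 0), deadline))  # prints 24.0
print(hours_before_deadline(datetime(2024, 5, 1, 15, 30), deadline))  # prints 0.5
```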
Finally, it is striking how an examination of relatively low level system integration protocols can point to much larger questions that could do with further exploration: whether the VLE really should continue to be cemented as the central coordination point in teaching and learning, and whether it is possible and desirable to set up an online assessment process that is centred on incremental authoring rather than file submissions and deadlines.