Effective Marking Techniques

I’ve written before with some tips about managing the exam period efficiently. In this piece I want to follow up with a broader discussion of your institution’s approach to marking.

Assessment is a central part of any university’s operations, and it is crucial that you understand not only your institution’s guidelines, but the marking principles that lie behind them. This will enable you to make effective choices when it comes to planning assessment of your own modules, as well as helping you to maintain benchmark standards when you are marking work yourself.

Checks and balances

Marking systems across universities tend to be fairly standardized, with subject benchmarking and the external examiner system requiring such checks and balances as the double marking of all assessment that contributes to final degree marks. There is widespread agreement that blind marking (i.e. two examiners marking the same piece without consultation) is the fairest means of operating a double marking system: each piece of assessment is marked twice, independently and without prejudice. For similar reasons, most institutions now operate anonymous assessment.

Agreeing marks

Markers then meet and (maintaining anonymity with regard to scripts) agree a mark on the basis of the two marks separately arrived at. Where agreement cannot be reached (in the event of a difference of two classes, for instance), such assessments should be passed on to the external examiner to adjudicate.
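
To make the reconciliation rule concrete, here is a minimal sketch in Python of how two blind marks might be handled. The class boundaries (40/50/60/70), the two-class escalation trigger, and the use of a rounded mean as the agreed mark are illustrative assumptions, not any particular institution’s procedure.

    # Minimal sketch of reconciling two blind marks. The boundaries, the
    # two-class escalation rule, and the averaging are illustrative only.
    CLASS_BOUNDARIES = [70, 60, 50, 40]   # First, 2:1, 2:2, Third

    def class_index(mark: int) -> int:
        """0 = Fail, 4 = First; lets us measure how far apart two marks sit."""
        return sum(mark >= b for b in CLASS_BOUNDARIES)

    def reconcile(mark_a: int, mark_b: int):
        """Agree a mark, or flag the script for the external examiner."""
        if abs(class_index(mark_a) - class_index(mark_b)) >= 2:
            return "refer to external examiner"
        # In practice the agreed mark is negotiated; the mean stands in here.
        return round((mark_a + mark_b) / 2)

    print(reconcile(58, 62))   # adjacent classes -> agreed at 60
    print(reconcile(48, 72))   # two or more classes apart -> escalated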


Some subjects operate a random moderation approach, selecting, say, 10-20% of coursework or exams to be reviewed by the examiner. Moderation involves checking that the marks awarded tally with the institutional guidelines, and that the feedback sits appropriately with the marks. The examiner, however, is not usually required to re-mark (effectively third-mark) individual pieces of assessment; more rarely, the examiner may make a ruling about the level of the cohort overall and apply a uniform adjustment accordingly.
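
As a rough illustration, a random moderation sample might be drawn along the following lines. The 15% sample fraction and the candidate-number identifiers are assumptions made for the sketch, not a recommended policy.

    # Minimal sketch of drawing a random moderation sample.
    import random

    def moderation_sample(candidate_numbers, fraction=0.15, seed=None):
        """Select a random fraction of scripts (at least one) for review."""
        rng = random.Random(seed)
        k = max(1, round(len(candidate_numbers) * fraction))
        return sorted(rng.sample(list(candidate_numbers), k))

    cohort = [f"C{n:03d}" for n in range(1, 61)]   # 60 anonymised scripts
    print(moderation_sample(cohort, seed=1))       # 9 scripts for the examiner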


Pegging marks is an effective way of eliminating redundancy in the percentage-based mark scale (in which, theoretically speaking, any mark between 0 and 100 is possible). Pegging means that a greatly reduced set of marks is available within each class or band. For instance, marks might be awarded only at 42, 45, 48, 50, 52, and so on up the scale. This has the advantage of limiting the number of marks per class to four (instead of the ten otherwise possible). In this system markers must decide whether to award, say, a 58 or a 60; a borderline mark of 59, with all its difficult mathematical (and, for the student, emotional) ramifications, is avoided.
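
A sketch of how pegging might work in practice is below. The particular pegged values (the class boundary plus 2, 5 and 8 within each decade band) and the tie-breaking rule are assumptions chosen to match the illustration above, not a standard scale.

    # Minimal sketch of a pegged mark scale and of snapping a raw mark onto it.
    def pegged_scale(low=40, high=100):
        """Allow only the boundary, +2, +5 and +8 within each decade band."""
        pegs = [0]                                # an outright zero stays possible
        for band in range(low, high, 10):
            pegs.extend([band, band + 2, band + 5, band + 8])
        pegs.append(100)
        return pegs

    def peg(raw_mark, pegs=None):
        """Snap a raw mark to the nearest pegged mark (ties go to the lower peg)."""
        pegs = pegs or pegged_scale()
        return min(pegs, key=lambda p: (abs(p - raw_mark), p))

    print(pegged_scale()[:9])       # [0, 40, 42, 45, 48, 50, 52, 55, 58]
    print(59 in pegged_scale())     # False: a borderline 59 cannot be awarded
    print(peg(59))                  # snaps to 58 under the tie-break above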

Criteria vs curve-based systems

Most institutions today mark on a criteria-based system: that is, marking is completed in accordance with a set of pre-existing criteria (institutional guidelines or assessment criteria) which outline the broad competencies that must be demonstrated for a piece of work to be awarded a certain grade. Performance is therefore assessed against specific, pre-existing criteria defined by the curriculum or by benchmarking. This has the advantage that standards remain relatively consistent over time and across the sector. Nevertheless there are disadvantages: group performance can vary significantly (particularly with small groups, where random variations in cohort performance can produce noticeably ‘bunched’ results), and high achievers can be grouped with others performing at a markedly lower level.
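
The point that a criteria-referenced grade depends only on the work itself, and not on the rest of the cohort, can be shown with a small sketch; the criterion names, the simple averaging, and the band boundaries are all assumptions for the purposes of illustration.

    # Minimal sketch of criteria-based marking: the band follows from the
    # work's own scores against fixed descriptors, whatever the cohort does.
    CRITERIA = ["argument", "evidence", "structure", "style"]
    BANDS = [(70, "First"), (60, "2:1"), (50, "2:2"), (40, "Third"), (0, "Fail")]

    def criteria_mark(scores):
        """Average the per-criterion marks and map the result to a class band."""
        mark = round(sum(scores[c] for c in CRITERIA) / len(CRITERIA))
        band = next(label for boundary, label in BANDS if mark >= boundary)
        return mark, band

    print(criteria_mark({"argument": 68, "evidence": 72, "structure": 65, "style": 63}))
    # (67, '2:1') -- the same scores earn the same band in any cohort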

Curve- or norm-based systems, by contrast, work by establishing the mean mark in a cohort and then distributing marks according to a normal ‘bell curve’ (i.e. with most marks bunched in the middle and clear tails at either end). These have the advantage of taking into account the top (actual rather than hypothetical) performance in a cohort, and distributing marks on the basis of that performance. However, the pre-determined norm (the bell) also means that only a certain percentage of students in a given cohort can receive a particular mark, because the distribution of marks has to match the curve. Students are, in effect, being ranked in relation to each other, rather than against a set of pre-existing criteria. So although you might want to use curve-based marking for some forms of light-touch assessment, you should be aware of the problems associated with it.
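
By way of contrast with the criteria-based sketch above, the following shows one simple way a curve might be applied: raw marks are converted to z-scores and rescaled onto a chosen mean and spread. The target mean of 62 and spread of 8 are assumptions, and real norm-referenced schemes often work with quotas per grade band rather than a straight rescaling.

    # Minimal sketch of norm-referenced ("curved") marking: each student's
    # final mark depends on their position relative to the cohort.
    from statistics import mean, stdev

    def curve(raw_marks, target_mean=62, target_sd=8):
        """Rescale raw marks onto a chosen mean and spread via z-scores."""
        mu, sigma = mean(raw_marks), stdev(raw_marks)
        return [round(target_mean + target_sd * (x - mu) / sigma) for x in raw_marks]

    strong_cohort = [55, 60, 62, 65, 70, 74]
    weak_cohort = [40, 45, 47, 50, 55, 59]
    print(curve(strong_cohort))   # both cohorts end up centred on 62,
    print(curve(weak_cohort))     # whatever their raw standard of work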
