In recent years, media attention has frequently been devoted to how teachers should cope with guessing on multiple-choice tests.

Correction for guessing

Such attention has regularly yielded criticism of the system of correcting for guessing that teachers have been using for years. In the interest of clarity, correcting multiple-choice tests for guessing means:

  • correct answer: + 1
  • no answer: 0
  • incorrect answer: -1 / (number of response options - 1)

For a question with four options, 1/3 of a point would thus be subtracted for an incorrect answer; 1/2 of a point would be subtracted for a three-option question, and a full point for a true/false question. One point is added for each correct answer, and no points are assigned for questions left blank.
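As a minimal sketch (the function name is illustrative, not from the source), the scoring rule can be written as:

```python
def guessing_corrected_score(num_correct, num_wrong, num_options):
    """Score under correction for guessing:
    +1 per correct answer, 0 per blank answer,
    -1/(k - 1) per wrong answer for questions with k response options."""
    return num_correct - num_wrong / (num_options - 1)

# Penalty for one wrong answer:
# 4 options -> -1/3, 3 options -> -1/2, 2 options (true/false) -> -1
```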

Example

A multiple-choice examination consists of 40 four-option questions. To pass, students must earn a score of at least 20/40. This could be achieved by answering 20 questions correctly and leaving the remaining questions blank [(20x1) + (20x0)]. Alternatively, it could be achieved by answering 20 questions correctly and guessing on the other 20, of which 5 (the theoretical chance level of 25%) would be guessed correctly [(20x1) + (5x1 for the questions guessed correctly) + (15x(-1/3) for the questions guessed incorrectly)].
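The arithmetic in this example can be checked directly (a minimal sketch; the variable names are illustrative):

```python
# Strategy A: 20 answered correctly, 20 left blank
score_a = 20 * 1 + 20 * 0

# Strategy B: 20 answered correctly, 20 guessed
# (at the 25% chance level: 5 right, 15 wrong at -1/3 each)
score_b = 20 * 1 + 5 * 1 + 15 * (-1 / 3)

print(score_a, score_b)  # both strategies reach the pass mark of 20
```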

The recent criticism of this system relates primarily to the fact that correction for guessing poses a disadvantage to students who avoid guessing. This ‘willingness-to-gamble effect’ introduces a level of distortion into the results. In particular, it leads to an over-correction for students who are hesitant to guess and an under-correction for students who are more inclined to take a gamble. Correction for guessing is thus likely to impose a disadvantage on students who are averse to risk and less inclined to guess. For an examination to be as objective as possible, the personality characteristics of students should not play a role.

Higher cut-off

Raising the cut-off is sometimes used as an alternative to correction for guessing. In the system of raising the cut-off, the pass/fail boundary is adjusted in order to correct for the possibility of guessing the right answer. In other words, the correction is not applied to each question (as in correction for guessing), but at the level of the test. We illustrate this with a concrete example:

Example

A multiple-choice examination consists of 40 four-option questions. The theoretical likelihood of guessing the right answer from four response options is 25%, or 10 questions. It is thus possible for students to answer 10 of the 40 questions correctly without having any knowledge. To pass, students must be able to answer 15 of the remaining 30 questions correctly. This yields a cut-off of 25/40 (10+15). In other words, students must be able to answer 25 questions correctly in order to pass this examination (example from the Open University; from Lesage et al., 2013).
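The cut-off in this example follows from the numbers above (a minimal sketch; the variable names are illustrative):

```python
questions, options = 40, 4

chance_correct = questions // options     # 10 questions answerable by pure chance
remaining = questions - chance_correct    # 30 questions that test actual knowledge
cutoff = chance_correct + remaining // 2  # pass mark: 10 + 15 = 25

print(cutoff)
```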

The difference

What is actually the difference between the two systems? Comparison of the two examples presented above reveals that students who have no mastery of the material and who guess on every question would theoretically earn a score of 0 in both systems. The difference is in the opening assumptions of the systems. The cut-off system essentially encourages students to guess if they do not know the correct answer. This is because a question left blank earns just as few points (zero) as an incorrect guess. In contrast, the correction-for-guessing system encourages students to answer questions only if they are at least relatively certain of the correct answer. If they are not, they should leave the question blank; otherwise, they risk losing points. In other words, correction for guessing discourages guessing.
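A quick expected-value check (a sketch, not from the source) makes the incentive difference concrete: under the cut-off system a blind guess has a positive expected score, while under correction for guessing its expected score is zero, the same as a blank, so the deterrent works through the risk of losing points rather than through an expected loss. This is consistent with the willingness-to-gamble effect described above.

```python
from fractions import Fraction

k = 4  # response options per question

# Cut-off system: a wrong answer scores 0, so a blind guess
# is worth 1/k in expectation, which beats leaving the question blank.
ev_guess_cutoff = Fraction(1, k) * 1

# Correction for guessing: a wrong answer scores -1/(k - 1),
# so the expected value of a blind guess is exactly zero.
ev_guess_corrected = Fraction(1, k) * 1 + Fraction(k - 1, k) * Fraction(-1, k - 1)

print(ev_guess_cutoff, ev_guess_corrected)  # 1/4 and 0
```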

With regard to the clarity of instruction, experienced students usually perceive the cut-off system as being clearer. It is simple: they pass if they achieve a score above a specified boundary (e.g. at least 12/20). The instructions for the correction-for-guessing system can seem more complex, due to the different scoring applied for blank/correct/incorrect answers. Programmes are now increasingly opting to apply a single system throughout the programme and to use sample examinations.
 

So what…?

In this tip, we list the various systems for coping with guessing, along with their respective advantages and disadvantages. We do not draw any conclusions in favour of either the correction-for-guessing or the cut-off system. Despite 50 years of research, there is still not enough evidence on which to base a definitive choice of system for coping with guessing on multiple-choice tests (Lesage et al., 2013).
The new system that was recently introduced by KU Leuven is characteristic of this ongoing search. The university did not agree with the reasoning of the cut-off system (e.g. as applied at Ghent University), which encourages students to guess, even if they have no idea of the correct answer. For this reason, they developed a system requiring students to indicate whether the various response options are ‘possible’ or ‘impossible’. This system operates like the correction-for-guessing system, but does not impose a disadvantage on students who are more averse to risk. The system is still in the testing phase.

In conclusion, there is one essential aspect of multiple-choice tests that should not be overlooked in the entire debate on ways to correct for guessing. Guessing is not the only factor that can influence the reliability and validity of multiple-choice tests. Non-representative questions, unclear questions, questions that contain clues to the correct answer, too few questions, faulty answer keys, questions that are overly easy/difficult with regard to the expected level and similar characteristics undermine the reliability of the examination just as strongly. In many cases, efforts to address these aspects could yield a more reliable and valid examination than would the application of correction systems aimed at counteracting guessing (Lesage et al., 2013).
 

Want to know more?

Haladyna, T., Downing, S., & Rodriguez, M. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.

Ghent University (2013). No more negative marking in multiple choice questions. Retrieved from the Ghent University website (accessed on August 23rd 2019).

Lesage, E., Valcke, M., & Sabbe, E. (2013). Scoring methods for multiple choice assessment in higher education – Is it still a matter of number right scoring or negative marking? Studies in Educational Evaluation, 39(3), 188-193.