Measure what you want to know

I was examined in Hebrew and History: ‘What is the Hebrew for the Place of a Skull?’ said the examiner. ‘Golgotha’, I replied. ‘Who founded University College?’ I answered, ‘King Alfred’. ‘Very well, sir’, said the examiner, ‘then you are competent for your degree.’ (Lord Eldon, quoted in James Woodforde’s Diary of a Country Parson)

Education is not random

In teaching, we do not simply do whatever comes to mind. We aim to achieve certain learning objectives, which indicate what students should learn from our teaching. The learning objectives therefore constitute the starting point for composing a test, and it is important to keep them close at hand during test construction. Ideally, all of the learning objectives should be assessed, so that a proper judgement can be made about the extent to which each student has successfully completed the module. If a module has a large number of learning objectives, the time available for testing (and the prescribed form of testing, e.g. an oral test) may necessitate making a selection from them. It is nevertheless advisable to test as many of the learning objectives as possible explicitly. One way to do this is to assess some of the learning objectives during the module itself, so that not everything has to be addressed in a concluding test.
 

Using a test matrix

Analysis of the testing practices of teachers in higher education provides clear evidence that many tests do not correspond to the learning objectives that have been specified for the relevant modules. In many cases, teachers are either unaware or insufficiently aware of this. A convenient supporting tool is available for helping teachers arrive at balanced tests that ‘assess what is supposed to be assessed’ (i.e. the extent to which the learning objectives have been achieved): a ‘test matrix’.

Example of a test matrix for an introductory course in physics

Topic                     Share of the course   Knowledge   Insight   Application   Problem-solving   Total
Classical field theory    50%                   15%         15%       20%           0%                50%
Electromagnetism          30%                   10%         10%       10%           0%                30%
Quantum mechanics         20%                   5%          10%       5%            0%                20%
Total                     100%                  30%         35%       35%           0%                100%


Creating a test matrix

To create a test matrix, the topics to be assessed are derived from the learning objectives and listed one after the other (Column 1 in the matrix). For each topic, a percentage is stated to indicate its relative importance within the course as a whole (Column 2). The time that a teacher devotes to each topic during the lectures and/or the detail in which it is discussed in the course materials play a decisive role here. Next, the levels at which students are expected to master each topic (e.g. knowledge, insight, application, problem-solving) are indicated, based on the learning objectives; a percentage is used here as well, taking into account the stated relative importance of the topic (Columns 3–6). Finally, the percentages are summed by row and by column to give a clear overview of the relative weight that each topic and each level of mastery should receive in the test. This relative weight can be reflected in the number of questions and/or in the distribution of points.
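A minimal sketch of this bookkeeping, assuming the matrix is stored as a nested dictionary of percentages (topic names and cell values are taken from the sample matrix above; the helper names are illustrative, not prescribed by the text):

```python
LEVELS = ["knowledge", "insight", "application", "problem_solving"]

# Each cell holds that (topic, level) combination's share of the whole test, in percent.
test_matrix = {
    "classical field theory": {"knowledge": 15, "insight": 15,
                               "application": 20, "problem_solving": 0},
    "electromagnetism":       {"knowledge": 10, "insight": 10,
                               "application": 10, "problem_solving": 0},
    "quantum mechanics":      {"knowledge": 5,  "insight": 10,
                               "application": 5,  "problem_solving": 0},
}


def topic_totals(matrix):
    """Row totals: the weight of each topic within the test as a whole."""
    return {topic: sum(cells.values()) for topic, cells in matrix.items()}


def level_totals(matrix):
    """Column totals: the weight of each mastery level within the test."""
    return {level: sum(cells[level] for cells in matrix.values())
            for level in LEVELS}


rows = topic_totals(test_matrix)   # {'classical field theory': 50, ...}
cols = level_totals(test_matrix)   # {'knowledge': 30, 'insight': 35, ...}
assert sum(rows.values()) == 100, "cell percentages must add up to 100%"
print(rows)
print(cols)
```

The assertion makes the basic consistency check explicit: the cells must add up to 100%, and the row and column totals are exactly the margins shown in the sample matrix.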

As the sample test matrix above shows, problem-solving ability need not be tested for this module, so an ordinary, ‘classic’ form of testing (e.g. a written test with closed-ended and/or open-ended questions, or an oral test) would probably suffice. Whichever form of testing is selected, it is important to ensure that around half of the test questions concern classical field theory, with roughly 30% focusing on electromagnetism and 20% on quantum mechanics. Should the teacher choose to ask one comprehensive open-ended question in which the various topics are addressed in an integrated manner, 50% of the points should be earned through mastery of classical field theory, 30% through mastery of the course content on electromagnetism and 20% through mastery of the content on quantum mechanics.

Similar reasoning applies to the level (knowledge, insight, application, problem-solving) at which questions are aimed and/or for which points can be earned. For example, in the module discussed above, a student who is merely capable of reproducing knowledge, but who has not processed it into insight and is unable to apply it to a concrete exercise (or situation), should earn no more than about 30% of the points. More specifically, this means that about 30% of the questions should assess pure knowledge and/or that students should be able to earn 30% of the points on the test by demonstrating that they have mastered the learning objectives at the level of knowledge.
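Building on the test_matrix dictionary from the sketch above, the same percentages can be turned into a points budget per cell. The 20-point total and the simple rounding rule are assumptions for the example only, not a recommendation from the text:

```python
def points_budget(matrix, total_points=20):
    """Points per (topic, level) cell, proportional to its share of the test."""
    return {(topic, level): round(total_points * share / 100)
            for topic, cells in matrix.items()
            for level, share in cells.items()
            if share > 0}   # problem-solving cells carry 0% and so get no points


budget = points_budget(test_matrix)
# ('classical field theory', 'knowledge') -> 3 points (15% of 20);
# the three 'knowledge' cells together -> 6 points, i.e. the 30% column total.
print(budget)
```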
 

Advantages of using a test matrix

Briefly stated, a test matrix helps to prevent both the over-assessment and the under-assessment of specific learning objectives in terms of content and/or level. A well-constructed matrix also provides guidance for arriving at comparable examinations across different examination periods (e.g. the June and September periods) and across different students (e.g. in oral examinations). With regard to the latter, to prevent the oral examination of student X from differing completely (in terms of the topics and levels assessed) from that of student Y, it is advisable to work with question sheets whose questions have been carefully assembled in advance. If students instead draw individual question cards at random, there is much less control over the extent to which the test as a whole corresponds to the test matrix.

Want to know more?

Davies, J.P. & Pachler, N. (2018). The context of the Connected Curriculum. In J.P. Davies & N. Pachler (Eds.), Teaching and Learning in Higher Education: Perspectives from UCL (pp. 3-20). London: UCL IOE Press.

Norton, L. (2009). Assessing student learning. In H. Fry, S. Ketteridge, & S. Marshall (Eds.), A handbook for teaching and learning in higher education: Enhancing academic practice (3rd ed., pp. 132-149). New York: Routledge.

Wakeford, R. (2003). Principles of student assessment. In H. Fry, S. Ketteridge & S. Marshall (Eds.), A handbook for learning and teaching in higher education (2nd ed., pp. 42-61). London: Kogan Page.

The Graide Network. (2018, September 10). Importance of Validity and Reliability in Classroom Assessments. Retrieved August 23, 2019, from The Graide Network website.

Gyll, S., & Ragland, S. (2018). Improving the validity of objective assessment in higher education: Steps for building a best-in-class competency-based assessment program. The Journal of Competency-Based Education, 3(1).

Morgan, C., Dunn, L., Parry, S., & O'Reilly, M. (2004). The student assessment handbook: New directions in traditional and online assessment. London: RoutledgeFalmer.