How to make intelligibility measurements intelligible? A methodological study extending the framework of comparative judgement. 01/10/2021 - 30/09/2025

Abstract

Suppose a speech therapist or a teacher is asked to estimate the intelligibility of a young child's speech. Speech intelligibility is an intuitively appealing notion, but one that is difficult to define formally. So how can it be measured reliably, validly, and efficiently? Traditionally, both holistic methods (e.g., assigning a single numerical rating) and analytical methods (e.g., assessing against a set of criteria) have been used. However, research shows that both approaches have their pitfalls and shortcomings. A method that has recently been advocated, mainly in the context of educational assessment, is comparative judgement. It builds on insights from psychology showing that people are better at comparing objects on a given characteristic than at scoring them one by one. A speech therapist, for example, listens to two speech samples and determines which of the two is more intelligible. Such judgements appear to be more reliable than assigning a holistic score. The goal of this project is twofold. First, we want to investigate the generalizability of comparative judgement and its merits in the domain of speech research. Second, we want to extend the method itself and develop an appropriate statistical model.

1. Generalization: the method of pairwise comparison is emerging mainly in the broad domain of measuring competencies in educational contexts. In this project, we closely examine how well comparative judgement generalizes to other domains in which people make judgements, in particular the scientific study of speech. More specifically, we ask: does comparative judgement lead to reliable, valid, and efficient assessments of the intelligibility of young children's speech?

2. Extension: the current form of comparative judgement is limited to simple pairwise comparison, i.e. a dichotomous choice between two alternatives. The reason is that the statistical model used to analyze the data, the Bradley-Terry-Luce model, only accommodates dichotomous data. However, pairwise comparison can be extended to ordinal and scale-based forms of selection, and it may also be important to take a multidimensional approach to assessment (comparing on multiple underlying aspects of intelligibility). In this study, we extend the statistical model into a generic model that can handle these different forms of data.
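As a concrete illustration of the statistical machinery referred to above, the sketch below fits a basic Bradley-Terry-Luce model to dichotomous pairwise-comparison data by maximum likelihood. It is a minimal, hypothetical example: the comparison outcomes, the number of speech samples, and the estimation details are invented for illustration, and it does not represent the project's actual data or its planned generic extension.

```python
# Minimal, hypothetical sketch: estimating latent intelligibility values from
# dichotomous pairwise comparisons with a basic Bradley-Terry-Luce model.
# Under BTL, P(sample i judged more intelligible than sample j)
#   = exp(theta_i) / (exp(theta_i) + exp(theta_j)).
import numpy as np
from scipy.optimize import minimize

# Invented data: each pair (winner, loser) is one judgement in which the
# first sample was preferred over the second.
comparisons = [(0, 1), (0, 2), (1, 2), (2, 3), (0, 3), (1, 3), (2, 1)]
n_samples = 4

def neg_log_likelihood(theta):
    # The model is invariant to adding a constant, so centre theta to fix the scale.
    theta = theta - theta.mean()
    ll = 0.0
    for winner, loser in comparisons:
        # log P(winner preferred) = theta_winner - log(exp(theta_winner) + exp(theta_loser))
        ll += theta[winner] - np.logaddexp(theta[winner], theta[loser])
    return -ll

fit = minimize(neg_log_likelihood, x0=np.zeros(n_samples), method="BFGS")
scale_values = fit.x - fit.x.mean()
print("Estimated intelligibility scale values:", np.round(scale_values, 2))
```

Extending this kind of model to ordinal or scale-based judgements (for instance, via cumulative-link formulations on the difference between two samples' latent values) and to multiple underlying dimensions is the sort of generalization targeted by the project's second objective.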

Researcher(s)

Research team(s)

Project type(s)

  • Research Project