8.00 - 9.00: Registration


9.00 - 10.00: Keynote 2 - 'Access as a conversation' and participatory approaches in media accessibility

Kate Dangerfield

The future of media accessibility (MA) is defined by collaborative partnerships and mutual understanding, as Mary Carroll and Aline Remael (2022) explain, and "a creative and inclusive movement is evolving among translator-practitioners and researchers" (ibid.: 2). Yet scholars within MA coming from a (critical) disability studies perspective argue that, despite the awareness surrounding participation, MA practice and research are still neither truly participatory nor inclusive.

Furthermore, access in MA is typically presented as a one-way process: a solution to a problem, namely the lack of certain normative abilities, such as hearing or seeing. In other words, access is typically provided by non-disabled people to work produced by non-disabled people. It is access as a monologue, rather than as a conversation (Romero-Fresco & Dangerfield, forthcoming), which pushes people with a disability into a passive role and reiterates a traditional approach in which services are provided exclusively for people with a perceived impairment (Fryer, 2018). As John Lee Clark, the deafblind poet, writes, "[t]he way [...] [access services] are lobbied for, funded, designed, implemented, and used revolves around the assumption that there's only one world [a non-disabled world] and ignores realms of possibility nestled within those same modes" (Lee Clark, 2021). In other words, as disability scholars such as Georgina Kleege, Piet Devos and Hannah Thompson emphasize, traditional approaches in MA are rooted in ableist assumptions.

Fundamental to these ideas is the concept of the body politic, one of the most important concepts in western philosophy, as Bruno Latour et al. (2020) write. For Latour et al. (2020), a coherent definition of collective bodies is lacking across disciplines, and a better understanding of "the relation between collective and individual agency" (ibid.: 1), considering human and non-human entities, is needed in the New Climatic Regime. Following Latour et al. (ibid.), in this paper I explore the origin, nature, quality, impact and undertone of the concepts of the individual and the collective, and look at how these concepts are defined in media accessibility and what this means in terms of participatory approaches.

Carroll, M. and Remael, A. (2022). "Sketching Tomorrow's Mediascape – And Beyond." Journal of Audiovisual Translation, 5(2), 1–14.

Fryer, L. (2018). “Staging the Audio Describer: An Exploration of Integrated Audio Description.” Disability Studies Quarterly 38(3).

Latour, B., Schaffer, S. and Gagliardi, P. (eds.) (2020). A Book of the Body Politic: Connecting Biology, Politics and Social Theory. Italy: Fondazione Giorgio Cini.

Lee Clark, J. (2021). "Against Access." McSweeney's 64: The Audio Issue.

Romero-Fresco, P. and Dangerfield, K. (forthcoming). "Access as a Conversation." Journal of Audiovisual Translation, 5(2).

Kate Dangerfield is a researcher, filmmaker and accessibility consultant. Her completed practice-as-research PhD, Within Sound and Image, focuses on developing the approach of accessible filmmaking by creating space for the people involved in The Accessible Filmmaking Project (in collaboration with the UK charity Sense, funded by the British Film Institute), who have dual/single sensory impairments and complex communication needs. As an ally, Kate is passionate about challenging the disabling barriers that currently exist within society, and her work now focuses on developing the concept of 'access as a conversation' in theory and practice.

10.00 - 10.30: Break


10.30 - 12.00: Session 14 - Subtitling & emotions

Subtitles and emotions – a psychophysiological study of the emotional correlates of aesthetically integrated and standard subtitles

Pierre-Alexis Mével, Myron Tsikandilakis

This presentation provides an overview of a study carried out by a team of researchers at the University of Nottingham (United Kingdom) to determine the emotional correlates of subtitles that are integrated into a film's aesthetics – sometimes known as creative subtitles (McClarty 2012), integrated titles (Fox 2016, 2018) or free form subtitles (Bassnett et al. 2022). Using clips from the film Night Watch (Bekmambetov 2004), the researchers relied on a between-subject design to measure differences in the reception of standard vs creative subtitles in relation to a key emotion: fear. Through a methodology combining electrodermal activity (EDA), heart-rate responses (HR) and self-reports, the experiment sought to establish whether the integration of subtitles into a film's aesthetics impacts on their emotional reception. The focus on fear is justified on three grounds: it is a basic universal emotion that appears – even if briefly – in a multitude of film genres; it is the most widely explored emotion in Linguistics and Psychophysiology as regards the correlates of arousal it induces; and it has not yet been tested in the context of aesthetically integrated subtitles (AIS). The methodology is innovative but also draws on a growing body of work that relies on psychophysiological measurements in the field of audiovisual translation (Fryer 2013; Matamala et al. 2020). Our findings suggest that AIS resulted in higher physiological arousal and higher self-reported ratings of viewing-experience quality. AIS may open up new creative possibilities for content creators, while also focussing minds among accessibility providers on creating and using tools that can streamline and democratize the use of AIS.

Bassnett, S. et al. (2022). Translation and Creativity in the 21st century. World Literature Studies, 14(1), 3–17. 

Fox, W. (2016). Integrated titles: An improved viewing experience? In S. Hansen-Schirra & S. Grucza (Eds.), Eyetracking and Applied Linguistics (pp. 5–30). Language Science Press. 

Fox, W. (2018). Can integrated titles improve the viewing experience? Investigating the impact of subtitling on the reception and enjoyment of film using eye tracking and questionnaire data. Translation and Natural Multilingual Language Processing, 9. Language Science Press. 

Fryer, L. (2013). Putting It into Words: The Impact of Visual Impairment on Perception, Experience and Presence. PhD Thesis. London: University of London. 

Matamala, A. et al. (2020). Electrodermal activity as a measure of emotions in media accessibility research: methodological considerations. JosTrans, 33, 129–151. 

McClarty, R. (2012). Towards a multidisciplinary approach in creative subtitling. MonTi. Monografías de Traducción e Interpretación, 4, 133–155.

Pierre-Alexis Mével (presenter) is a researcher at the University of Nottingham (UK). He is the author of the monograph 'Can We Do the Right Thing? Subtitling African American English into French' (Peter Lang, 2017) and has a particular interest in the representation of non-standard varieties in visual media, and in creative captioning for the screen as well as live performances.

Myron Tsikandilakis (presenter) is a researcher at the University of Nottingham (UK).


“If I am translating a film, my emotion needs to appear in the translation”: translators' narratives about subtitling

Érica Lima

The affective turn (Clough, 2007), whose basic premise is that emotions and affects play an essential role in personal, professional, and social life, determining our relationships and experience of the world, has gained space in translation studies in recent decades. In this context, and from a transdisciplinary theoretical-methodological perspective, this talk presents part of a broader study that analyzes professional translators' oral narratives about the emotional effects that translating non-literary texts has on their lives. The presentation reports some results of this ethnographic research, arising from the analysis of excerpts from the narratives of three Brazilian translators who subtitle sensitive audiovisual material, such as documentaries about the Holocaust, films about violence against women, and pornographic material. The narratives show how the translators were affected by the translation and how this reverberated in the translation process, enabling a look at emotions through the strategies adopted in these translations. Data were collected through semi-structured online interviews lasting about one hour per person. The interviews focused on the translators' emotional experience and raised specific issues of how the effects of emotion were perceived in the body and what strategies the translators adopted to avoid emotional involvement with the theme. The selected narratives show, at first, an ambivalence between the recognition that the person is affected during and even after finishing the translation and a fear of showing this involvement to the client. In this sense, it was possible to observe the translators' concern with the impression they may give to the client if they express an emotional involvement with the translation, as they understand that the client expects neutrality and impartiality. In agreement with Lehr (2021), Hubscher-Davidson (2018), and Rojo (2017), the analyses of these narratives indicate that the study of affective and emotional aspects and their influence on the translation process and the translators' lives can contribute to the training of translators, enabling discussions about personal and professional engagement, interference in ethical decisions, and greater awareness of the emotional and physical reactions resulting from translation. As a transdisciplinary theoretical framework guiding the analyses, we draw on the characteristics of narrative (temporality, relationality, causal articulation, and selective appropriation) described by Baker (2006; 2018), on emotion studies (Damásio, 2012; Barrett, 2017; Hokkanen and Koskinen, 2018) and on the performative (Austin, 1990; Robinson, 2003; 2015). The talk aims to demonstrate that the translator's emotional engagement with the translation can be something positive and that the translator's role becomes more, rather than less, important in the informational age.

Austin, J. L. Quando dizer é fazer. Palavras e Ações. Trad. de Danilo Marcondes de Souza Filho. Porto Alegre: Artes Médicas, 1990. 

Baker, M. Translation and Activism: Emerging Patterns of Narrative Community. The Massachusetts Review, vol. 47, no. 3, pp. 462–484, 2006. 

Baker, Mona. 2018 [2013]. Translation as an alternative space for political action /A tradução como um espaço alternativo para ação política. Cadernos de Tradução. 38 (2), 339-380. Florianópolis: UFSC. 

Barrett, Lisa Feldman. How Emotions Are Made: The Secret Life of the Brain. Boston and New York: Houghton Mifflin Harcourt, 2017.

Clough, Patricia Ticineto. Introduction. In: Clough, P. T. The Affective Turn. Durham and London: Duke University Press, 2007, p.1-33. 

Damásio, António R. O erro de Descartes. Emoção, razão e o cérebro humano. Trad. Dora Vicente e Georgina Segurado. São Paulo, Companhia das Letras, 2012. 

Hokkanen, S.; Koskinen, K. Affect as a Hinge: The Translator's Experiencing Self as a Sociocognitive Interface. In: Ehrensberger-Dow, M.; Englund Dimitrova, B. (eds.) Exploring the Situational Interface of Translation and Cognition. Benjamins Current Topics 101. Amsterdam: John Benjamins, 2018. p. 75-93.

Hubscher-Davidson, S. Translation and Emotion. A Psychological Perspective. New York: Routledge, 2018, p.107-146. 

Lehr, C. Translation, emotion and cognition. In: Alves, F.; Jakobsen, A. L. (eds.) The Routledge Handbook of Translation and Cognition. New York and London: Taylor & Francis, 2021, p. 294-309.

Robinson, Douglas. Performative linguistics. Speaking and translating as doing things with words. New York and London: Routledge, 2003, p.70-81. 

Robinson, Douglas. The somatics of tone and the tone of somatics: The Translator's Turn revisited. Translation and Interpreting Studies, 10(2). John Benjamins Publishing Company, 2015, p. 299-319.

Rojo, Ana. The Role of Emotions. In The Handbook of Translation and Cognition, First Edition. ed J. W. Schwieter and A. Ferreira. John Wiley & Sons, 2017, p. 369-385.

Érica Lima has a PhD in Languages from São Paulo State University, a Master's degree in Applied Linguistics from The State University of Campinas and a Bachelor's degree in French Translation, also from São Paulo State University. She has been a professor in the Department of Applied Linguistics of the Institute of Language Studies at The State University of Campinas since 2015. Her main areas of interest are the interface between translation studies and trends in contemporary thought (identity, gender, ideology, emotion) and translator education. The paper presented at this Media for All conference has been supported by the Brazilian National Research Council (CNPq, grant 102448/2022-1). E-mail: elalima@unicamp.br

12.00 - 13.30: Lunch


13.30 - 15.00: Session 15 - Subtitling & eye tracking

Subtitle reading in different online EMI lectures: how a change of image can lead to a change in reading

Senne M. Van Hoecke, Jan-Louis Kruger

The recent pandemic caused higher education institutions around the world to explore remote lecturing. Remote teaching can happen live, but can also be done through pre-recorded lectures. While recorded lectures are undoubtedly limited in a number of ways, they also have some advantages. One of these is that they can be subtitled in advance, increasing accessibility for multicultural and multilingual student audiences. How the type of lecture impacts subtitle reading, however, has not yet been researched extensively. This paper focuses on the use of Audiovisual Translation (AVT) in education. The bulk of research on AVT in education has examined its benefits for language learning and accessibility. In recent years, however, an increasing number of studies have moved away from these themes and studied AVT, or more specifically subtitles, and their effects on comprehension and cognitive load in standard education, e.g., the Subtitles for Access to Education project, the Accessibility meets Multimedia Learning project, Hosogoshi (2016), Chan, Kruger and Doherty (2019) and Liao, Kruger and Doherty (2020). One of the earlier studies highlighted the importance of measuring actual subtitle reading when studying the effects of subtitles in lectures (Kruger & Steyn, 2013). While later studies often do consider subtitle reading and its influence on the perception of the lecture, the effect of the lecture's presentation style on subtitle reading has received very little attention. One study that does look closely at how subtitles are processed in a semi-educational video (a documentary) found that the process of reading subtitles changes significantly when concurrent visual material is present (Liao et al., 2021). That study, however, only looks at whether visual material is present or not and does not investigate the impact of different levels of visual complexity on subtitle reading. Another study, by van der Zee et al. (2017), does look into the effects of visual-textual complexity in subtitled lectures, but it does not include any eye-tracking measures or make any statements about subtitle reading. The present paper aims to bridge the gap between these studies by looking into subtitled lectures in different formats and closely examining the subtitle reading process. It reports on a large-scale experiment designed to examine the impact of lecture format on subtitle reading and comprehension. Forty L1 English-speaking students watched three different L2 English lectures with intralingual subtitles in three commonly used online lecture formats, i.e., a talking head, PowerPoint slides, and PowerPoint slides with an integrated talking head, while being monitored with an SR EyeLink eye-tracking system. After viewing the lectures, students were also asked about their experiences with the different lectures. The experiment aims to create a more profound understanding of how visual complexity influences the reading of subtitles as well as attention distribution between slides and subtitles. Dependent variables include average fixation duration, fixation count, percentage dwell time, number of crossovers between image and subtitles, and saccade length. The data are currently being processed and results will be reported in the paper.
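To make the dependent variables above concrete, the minimal sketch below (in Python, with pandas) shows how the subtitle-level measures could be derived from a fixation log that has already been assigned to areas of interest (AOIs). The column names and AOI labels are illustrative assumptions rather than the authors' actual pipeline, and saccade length would additionally require fixation coordinates.

    import pandas as pd

    # Hypothetical fixation log: one row per fixation, in chronological
    # order, labelled with the area of interest (AOI) it landed in.
    fix = pd.DataFrame({
        "aoi":    ["slides", "subtitle", "subtitle", "slides", "subtitle"],
        "dur_ms": [420, 230, 195, 510, 240],
    })

    dwell = fix.groupby("aoi")["dur_ms"].sum()  # total dwell time per AOI
    metrics = {
        "fixation_count":       fix.groupby("aoi").size().to_dict(),
        "mean_fixation_dur_ms": fix.groupby("aoi")["dur_ms"].mean().round(1).to_dict(),
        "pct_dwell_time":       (dwell / dwell.sum() * 100).round(1).to_dict(),
        # A crossover is a pair of consecutive fixations in different AOIs,
        # e.g. the eyes leaving the slides to start reading the subtitle.
        "crossovers":           int((fix["aoi"] != fix["aoi"].shift()).iloc[1:].sum()),
    }
    print(metrics)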

Senne M. Van Hoecke (presenter) is a cotutelle PhD student at the Department of Applied Linguistics, Translation and Interpreting of the University of Antwerp (Belgium) and the Department of Linguistics of Macquarie University (Australia). He specializes in Audiovisual Translation, more specifically reception of subtitles, Cognitive Translation Studies and Instructional Design. Further research interests include readability and Automatic Writing Evaluation. He is conducting a research project entitled ‘Subtitles for Access to Education’, under the supervision of Dr. Iris Schrijver (University of Antwerp), Dr. Isabelle S. Robert (University of Antwerp) and Dr. Jan-Louis Kruger (Macquarie University). Senne M. Van Hoecke is on the editorial board of the Linguistica Antverpiensia - Themes in Translation Studies journal. He also assists in teaching German consecutive and simultaneous interpreting courses, and an audiovisual translation course. https://orcid.org/0000-0003-0519-576X 

Jan-Louis Kruger is professor of Linguistics at Macquarie University. His research focuses on the processing of language in multimodal contexts, specifically in audiovisual translation, reading, and interpreting. His main approaches are aligned with cognitive psychology and psycholinguistics. Primarily, his projects focus on investigating cognitive processing when more than one source of information has to be integrated, as in the reception of subtitles or the production of interpreting. He is on the editorial board of the Journal of Audiovisual Translation. https://orcid.org/0000-0002-4817-5390


The influence of subtitle-video congruency on subtitle reading: evidence from eye movements

Sixin Liao, Lili Yu, Jan-Louis Kruger, Erik Reichle

Despite the prevalent use of subtitles in educational videos, our knowledge about the mental processes underlying the interaction and integration of subtitles and video content is still limited. To advance our understanding in this sphere, two experiments were conducted to investigate how the congruency between subtitles and video content might affect the reading of subtitles, and how such effects might be modulated by subtitle speed. In Experiment 1, participants watched six videos from a BBC documentary series with subtitles presented at three different speeds (12 cps, 20 cps, and 28 cps) while their eye movements were recorded. The degree of congruency between each subtitle and its accompanying video content was rated by a different group of participants, with the average congruency rating treated as a fixed factor in the eye-movement data analyses. Experiment 2 set out to examine the influence of subtitle-video congruency with more experimental control, using a sentence-picture verification paradigm. Participants were presented with 240 short videos that displayed a moving geometric object with a subtitle that either described the video accurately or inaccurately (i.e., true/false subtitles). Participants were asked to judge whether the subtitle described the video accurately and to press the corresponding key as soon as possible while their eye movements were recorded. Three congruency conditions (high vs. medium vs. low congruency) were defined by manipulating the extent to which the subtitle disambiguated the descriptors for three features of the object, namely color, shape, and orientation. In the high-congruency condition, all three features were described unambiguously (e.g., "a red circle object is moving to the left"), whereas the medium-congruency condition had one feature described ambiguously (e.g., "a red circle object is moving horizontally"). In the low-congruency condition, two features were described ambiguously (e.g., "a red curved object is moving horizontally"). Participants saw all videos with subtitles at two speeds (12 cps and 28 cps). The 12 cps condition was always presented before the 28 cps condition in order to prevent participants from applying a text-skimming strategy they might develop in the fast-speed condition to the low-speed condition. All videos within the same speed condition were presented in a randomized order. Preliminary results showed that participants spent proportionally less time reading subtitles and made fewer inter-word regressions when the video content was more congruent with the subtitle. These findings suggest that, when reading in multimodal contexts, readers constantly assess the relations between different information sources to inform their decisions about when and where to move their eyes. Implications for research on multimodal reading and multimedia learning will be discussed.
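As an illustration of how a congruency rating can enter eye-movement analyses as a fixed factor, the sketch below fits a linear mixed model in Python's statsmodels. This is an assumption for illustration only: the abstract does not name the software, and R's lme4 is the more common choice in this literature (it also allows crossed random effects for items, which statsmodels does not). The file and column names are invented.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per subtitle per participant,
    # with the mean congruency rating obtained from the separate rater group.
    df = pd.read_csv("subtitle_eye_movements.csv")
    # expected columns: participant, dwell_time, congruency, speed_cps

    # Dwell time on the subtitle modelled as a function of congruency and
    # subtitle speed, with random intercepts for participants.
    model = smf.mixedlm("dwell_time ~ congruency * speed_cps",
                        data=df, groups=df["participant"])
    print(model.fit().summary())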

Sixin Liao is a lecturer in Translation Studies at Macquarie University in Australia. She obtained her Master of Research (2018) and PhD (2021) in Linguistics from Macquarie University, after completing a coursework Master (2016) in Translation and Interpreting Studies at Manchester University in the UK. Her PhD project and recent publications in international journals focus on using eye tracking combined with post-hoc measures to understand the reading of subtitles in multimodal contexts such as educational videos. 

Jan-Louis Kruger (presenter) is professor of Linguistics at Macquarie University. His research focuses on the processing of language in multimodal contexts, specifically in audiovisual translation, reading, and interpreting. His main approaches are aligned with cognitive psychology and psycholinguistics. Primarily, his projects focus on investigating cognitive processing when more than one source of information has to be integrated, as in the reception of subtitles or the production of interpreting. He is on the editorial board of the Journal of Audiovisual Translation. https://orcid.org/0000-0002-4817-5390 

Lili Yu is an Associate Lecturer in the Department of Psychology at Macquarie University. Her research uses experimental approaches and a wide range of methodologies (e.g., eye-tracking) to understand the coordination of perceptual and cognitive processes (e.g., vision, attention, memory, language processing) underlying natural reading. She is particularly interested in understanding whether and how various writing systems influence readers' reading behaviors differently. For example, Chinese text is visually (and perhaps linguistically) much denser than alphabetic writing systems, owing to its complex character structure and lack of inter-word spacing. How, then, might our cognitive system respond to Chinese reading, which potentially places heavier cognitive demands on early visual processing? And do these remarkable differences between first and second languages lead to different learning and reading patterns in the second language?

Erik Reichle is Professor of Psychology at Macquarie University. His research uses computational modeling, eye-movement experiments, and other methods (e.g., ERP) to understand the perceptual, cognitive, and motor processes involved in reading. He has authored more than 60 articles on these topics in international journals, including Behavioral and Brain Sciences, Psychological Review, and Psychological Science. He has also received fellowships from the Hanse Institute of Advanced Studies (Germany) and the Leverhulme Trust (United Kingdom). His new book, entitled Computational Models of Reading: A Handbook, provides a comprehensive review of models that are used to understand the mental processes involved in reading.


When accessibility meets multimedia learning: effect of intralingual live subtitling on perception, performance and cognitive load in an EMI university lecture

Yanou Van Gauwbergen, Isabelle Robert and Iris Schrijver

One of the main challenges in higher education in the 21st century is providing educational access to an increasingly multilingual and multicultural student population. Many universities are therefore considering English-medium instruction (EMI), but students' limited proficiency in English can be a serious drawback. Live subtitling might help to overcome this language barrier by removing physical and linguistic barriers at the same time. The aim of this paper presentation is to report on preliminary results of a project that investigates (1) how university students in Flanders perceive EMI lectures with intralingual live subtitles, i.e. lectures for which the words of the lecturer are subtitled in real time in the same language as the speaker (English), (2) whether these subtitles influence their performance, and (3) what impact these subtitles have on their cognitive load. In the study on which this paper presentation is based, the impact of live subtitling on perception, performance and cognitive load was investigated during six two-hour Research Skills lectures taught in English to students of Applied Linguistics with Dutch as their mother tongue. The live subtitling was produced in real time, alternately through respeaking and automatic speech recognition, each time during two lecture fragments of approximately 20 minutes (one before and one after the break in each lecture). The two production methods were not used in the same lecture (i.e., respeaking in week one, automatic speech recognition in week two, etc.). Although respeaking is currently the preferred method for live subtitling, because it is considered to offer the best quality, automatic speech recognition is expected to become the production method of the future and is also the cheapest and most feasible option if live subtitles are actually to be implemented at universities. In this paper, we use the objective NER accuracy rate to present our findings on the quality of the live subtitling, the analysis of which is ongoing, and link this to the students' perception of the two production methods. Quantitative and qualitative data have been collected using (1) online language tests, consisting of a certified listening test and a vocabulary test to determine the students' English proficiency; (2) an online questionnaire on demographics (e.g., mother tongue and self-reported proficiency in English); (3) online questionnaires after each lecture about the content and the perception of the lecture; (4) eye-tracking glasses to measure cognitive load; and (5) post-hoc interviews after the series of lectures focusing on the eye-tracking experience. The first results indicate that the influence of the live subtitles is uncertain, and we are now analyzing possible explanations for these findings.
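For readers unfamiliar with the NER model of live-subtitling quality, the accuracy rate is computed as (N - E - R) / N x 100, where N is the number of words in the subtitles and E and R are the edition and recognition error scores, each weighted by severity (conventionally 0.25 for minor, 0.5 for standard and 1 for serious errors); 98% is usually taken as the threshold for acceptable quality. A minimal sketch with invented figures:

    def ner_accuracy(n: float, e: float, r: float) -> float:
        """NER accuracy rate: (N - E - R) / N * 100, with E and R the
        severity-weighted edition and recognition error scores."""
        return (n - e - r) / n * 100

    # Invented example: a 1,200-word lecture fragment with edition errors
    # weighing 6.5 and recognition errors weighing 8.25.
    score = ner_accuracy(1200, 6.5, 8.25)
    print(f"NER accuracy: {score:.2f}%")  # 98.77%, above the usual 98% benchmark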

Yanou Van Gauwbergen (presenter) is a researcher at the University of Antwerp. Passionate about languages and cultures, he pursued a bachelor's degree in Applied Linguistics at the University of Antwerp, starting in 2016, studying Dutch, French and Spanish. He spent the first semester of his third bachelor's year abroad at the University of Tours, perfecting his French and Spanish language skills while also studying Italian and German. After graduating cum laude in 2019, he obtained a master's in Translation and a master's in Interpreting, graduating magna cum laude in 2020 and 2021 respectively. During his master's in Translation, he did an internship at the Flemish development cooperation organization Trias, working as a translator between Dutch, French, Spanish and English, which later led him to join the organization as a volunteer translator. Before starting his PhD in Translation Studies at his alma mater, he gained professional experience as a respeaker at the Flemish private broadcaster VTM, providing (live) SDH subtitles for the news. This experience allows him to empathize closely with the research project he is currently working on as a PhD student, investigating the effect of intralingual live subtitling on perception, performance, and cognitive load in an EMI university lecture.


15.00 - 15.30: Break


15.30 - 17.00: Session 16 - Subtitling & eye tracking

Watching subtitled videos without sound: evidence from eye-tracking

Valentina Ragni, David Orrego-Carmona, Jan-Louis Kruger

Research shows that when watching video without sound, viewers tend to rely more on the subtitles than when watching video with sound (Łuczak 2017, Liao et al. 2021). Interestingly, although the absence of sound resulted in more visual attention to the subtitles (as indicated by a higher number of fixations), it did not negatively affect comprehension. As a recent survey by Verizon Media and Publicis shows, many people watch subtitled videos without sound for a variety of reasons (McCue 2019), making it essential to understand how viewers engage with such content. To this end, we conducted an eye-tracking experiment in which two distinct cohorts of viewers, namely L1 English speakers and L1 Polish speakers (with knowledge of English as a foreign language), watched two English videos with English subtitles, with and without sound, while their eye movements were recorded with an EyeLink 1000 Plus eye tracker. Using a within-subject design, we study how the absence or presence of sound affects reading behaviour, operationalised as different eye-movement measures at both the subtitle and word level. In subtitle-level analyses comparing the subtitle area to the video area, we examine fixation duration, fixation number, saccade length, crossovers, and percentage dwell time on the subtitle and video areas. In word-level analyses, we examine the same eye-tracking metrics for the individual words in the subtitles, as well as regressions, crossovers and word skipping. We are also interested in how various participant-level variables (e.g. working memory, English proficiency) and word-level variables (e.g. word frequency, word length) affect the subtitle reading process. We use mixed-effects modelling to analyse the impact of sound and other predictor covariates (e.g. native vs. non-native status, and L2 English proficiency within the Polish cohort). The research questions considered in this paper are:
• How does the presence or absence of sound affect reading behaviour at the word level in each individual subtitle?
• How does the presence or absence of sound impact attention distribution between subtitles and video image?
• How do the reading patterns of L1 English viewers compare to those of L1 Polish viewers?
In line with previous research, we predict that the absence of sound will increase dependence on the subtitles, manifested in more and longer fixations, longer dwell times on subtitles, shorter saccades and less skipping of words and subtitles, as well as less visual attention to the images. Our findings will provide timely insights into the degree to which subtitle reading behaviour is affected by the presence or absence of sound.

Liao, S., Yu, L., Kruger, J.-L. & Reichle, E. 2021. "The impact of audio on the reading of intralingual versus interlingual subtitles: Evidence from eye movements." Applied Psycholinguistics, 1–33.

Łuczak, K. 2017. "The effects of the language of the soundtrack on film comprehension, cognitive load and subtitle reading patterns. An eye-tracking study." MA Thesis, Institute of Applied Linguistics, University of Warsaw. 

McCue, TJ. 2019. "Verizon Media Says 69 Percent of Consumers Watching Video with Sound off." Forbes, 31 July 2019.

Valentina Ragni (presenter) is currently a Research Fellow based at the University of Warsaw (Poland). She has a PhD in Translation Studies from the University of Leeds, where she used eye-tracking technology to investigate the effects of watching subtitled videos on memory in advanced foreign language learners. Before Poland, she worked at the University of Bristol on a project assessing the impact of productivity-enhancing technologies – such as machine translation and behaviour-tracking tools – on professional translators. She is particularly interested in the cognitive and psychological aspects of translation, both as a learning tool and as a professional practice. She is a member of Subtle – The Subtitlers’ Association (UK), the UK Institute of Translation and Interpreting (ITI), and the European Society for Translation Studies (EST).

David Orrego-Carmona is assistant professor in Translation at the University of Warwick. David's research deals primarily with translation, technologies and users. It analyses how translation technologies empower professional and non-professional translators and how the democratisation of technology allows translation users to become non-professional translators. Using qualitative and quantitative research methods, his work explores the societal affordances and implications of translation and technologies. He is treasurer of ESIST, the European Association for Studies in Screen Translation, associate editor of the journal Translation Spaces and deputy editor of JoSTrans, the Journal of Specialised Translation. https://orcid.org/0000-0001-6459-1813 

Jan-Louis Kruger is professor of Linguistics at Macquarie University. His research focuses on the processing of language in multimodal contexts, specifically in audiovisual translation, reading, and interpreting. His main approaches are aligned with cognitive psychology and psycholinguistics. Primarily, his projects focus on investigating cognitive processing when more than one source of information has to be integrated, as in the reception of subtitles or the production of interpreting. He is on the editorial board of the Journal of Audiovisual Translation. https://orcid.org/0000-0002-4817-5390


Subtitles in VR 360º video: results from an eye-tracking experiment

Krzysztof Krejtz, Marta Brescia-Zapata, Andrew Duchowski, Chris Hughes, Pilar Orero

Three hundred and sixty-degree (360º) immersive videos for Head Mounted Display (HMD) devices offer great potential for engaging media experiences. Understanding human behaviour is a fundamental issue and a point of departure for defining quality of experience and quality of service. Many challenges emerge due to the novelty of the format. In addition, the use of Virtual Reality (VR) headsets for long periods of time often causes nausea or motion sickness. These limitations have direct implications for testing user performance on tasks and preferences. There is no denying the central role of user needs, requirements and expectations in the testing stages and in system definition. Still, one of the lessons learnt from previous end-user tests on accessibility services in immersive environments (XR) concerns users' skills in VR technology. A mixed-method design should be adopted to overcome the potential limitations of the applied testing methods, combining qualitative and quantitative approaches and triangulating the results. A common practice in experimental research in AVT studies is the multidisciplinary approach, which is fundamental when dealing with the nature of the medium and the high impact of technology (Orero et al. 2018). User-centred research is the most popular methodology when defining and designing system requirements for technological solutions. In an ideal world, any system or process should be designed with accessibility in mind from the onset (Mével, 2020; Romero-Fresco, 2013). This naturally leads to a born-accessible system, avoiding expensive and complex afterthought solutions. Previous experiments involving 360º immersive video (Agulló and Orero 2017; Fidyka and Matamala 2018; Agulló and Matamala 2019) also follow a user-centric design, but the traditional use of focus groups or questionnaires is insufficient in immersive media, as the user experience is now acutely personalised. Therefore, while standards are sought for general deployment, they must be developed through evaluation of the individual's experience. This study aims to further clarify the best way to display subtitles in immersive environments for all kinds of users. Feedback has been gathered from 73 participants (24 in Barcelona, 24 in Manchester, and 25 in Warsaw) regarding preferences and task load when watching subtitled content in 360º videos, along with additional data based on participants' eye movements. A new framework (Hughes et al., 2020) that allows for subtitle editing and evaluation in 360º videos will be presented, along with a methodology based on the triangulation of metrics, including psycho-physiological process metrics (eye movements), performance metrics (scene comprehension) and subjective self-reports (task load and preferences). Results show that head-locked coloured subtitles are the preferred option.

Krzysztof Krejtz (presenter) is a psychologist at SWPS University of Social Sciences and Humanities in Warsaw, Poland, where he leads the Eye Tracking Research Center. His research focuses on Human-Computer Interaction, multimedia learning, and media accessibility. He has given invited talks at, among others, the Max Planck Institute (Germany), Bergen University (Norway), the University of Nebraska-Lincoln (USA), and Ulm University (Germany). He is a member of the ACM Symposium on Eye Tracking Research and Applications (ACM ETRA) Steering Committee and Full Paper Co-Chair for ETRA'22 and ETRA'23. He leads the LEAD-ME COST Action (CA 19142) on media accessibility.

Marta Brescia-Zapata is a PhD candidate in the Department of Translation, Interpreting and East Asian Studies at the Universitat Autònoma de Barcelona. She holds a BA in Translation and Interpreting from Universidad de Granada and an MA in Audiovisual Translation from UAB. She is a member of the TransMedia Catalonia research group (2017SGR113), where she collaborates in two H2020 projects: TRACTION (Opera co-creation for a social transformation), and GreenScent (Smart Citizen Education for a greeN fuTure). She is currently working on subtitling for the deaf and hard of hearing in immersive media, thanks to a PhD scholarship granted by the Catalan government. She is the Spanish translator of Joel Snyder’s AD manual “The visual made verbal”, and also collaborates regularly as subtitler and audio describer at the Festival INCLÚS. 

Dr Andrew Duchowski is a professor of Visual Computing at Clemson University. He received his baccalaureate (1990) from Simon Fraser University, Burnaby, Canada, and doctorate (1997) from Texas A&M University, College Station, TX, both in Computer Science. His research and teaching interests include visual attention and perception, eye tracking, computer vision, and computer graphics. He joined the School of Computing faculty at Clemson in January, 1998. He has since produced a corpus of publications and a textbook related to eye tracking research, and has delivered courses and seminars on the subject at international conferences. He maintains Clemson's eye tracking laboratory, and teaches a regular course on eye tracking methodology attracting students from a variety of disciplines across campus. 

Dr Chris Hughes is a Lecturer in the School of Computer Science at Salford University, UK. His research is focused heavily on developing computer science solutions to promote inclusivity and diversity throughout the broadcast industry. This aims to ensure that broadcast experiences are inclusive across different languages, addressing the needs of those with hearing and low vision problems, learning difficulties and the aged. He was a partner in the H2020 Immersive Accessibility (ImAc) Project. Previously he worked for the UX group within BBC R&D where he was responsible for developing the concept of responsive subtitles and demonstrated several methods for automatically recovering and phonetically realigning subtitles. He has a particular interest in accessible services and is currently focused on developing new methods for providing accessibility services within an immersive context, such as Virtual Reality and 360º video. 

Professor Pilar Orero, PhD (UMIST, UK) works at Universitat Autònoma de Barcelona (Spain) in the TransMedia Catalonia Lab. She has written and edited many books, nearly 100 academic papers and almost as many book chapters, all on media accessibility. She has led and participated in numerous EU-funded research projects focusing on media accessibility. She works in standardisation and participates in the UN ITU IRG-AVA (Intersector Rapporteur Group on Audiovisual Media Accessibility), ISO and ANEC. She has been working on immersive accessibility for the past four years, first in a project called ImAc, whose results are now being further developed in TRACTION, MEDIAVERSE and MILE, and has just started to work on green accessibility in GREENSCENT. She leads the EU network LEAD-ME on media accessibility.


An online study exploring audience reception of Chinese impact captions

Xinying Chen

Anyone who has experienced Japanese, South Korean or Chinese light entertainment will have noticed that 'the heavy use of texts and graphics is a defining characteristic' (Maree, 2015, p. 171). Typically, the screen displays the programme name, broadcaster logo, and/or the section title of the specific programme in the corners, providing identification. Distinctively, creative text chunks are inserted onto the screen, visually different from conventional subtitles. Termed "impact captions" (Park, 2009, p. 160), this novel use of captioning originated in Japan and has since permeated South Korea and China. Chinese impact captions have gained importance since their debut on the reality show "Dad Where Are We Going" in 2013. Despite the popularisation of impact captions in Chinese light entertainment, audiences, as the end-users, have not been actively involved in existing research. Previous studies on Chinese impact captions have primarily discussed design aspects and software applications from a practitioner's standpoint (Zeng, 2014; Bian, 2017; Dong, 2019). Limited information exists regarding audience reception. The present study therefore employs an eye-tracking experiment and questionnaires to gain insights from the audience's perspective. It seeks to explore the influence of Chinese impact captions on the audience's eye movements, comprehension, and cognitive load. Two clips from the Chinese variety show "Happy Camp" have been selected for the study. Participants watch these two clips under different conditions (presence and absence of impact captions) and respond to questions concerning their comprehension, cognitive load and opinions on the current practice of Chinese impact captioning. As the study may still be ongoing at the time of my presentation, I will share its preliminary findings at the meeting. The pandemic and post-pandemic era have prompted shifts in research methodologies involving human subjects, and this has also affected the present study, which is carried out remotely with the help of online survey tools and webcam-based eye-tracking software. This presentation gives me an opportunity to discuss my experiences of executing an online study, including the technical issues I have encountered, the challenges of managing access, the difficulties of quality control, and the solutions I have devised.

Xinying Chen is a PhD student at the University of Bristol, UK. Her main research interests include subtitling (especially the novel practices of subtitling), audience reception, and multimodality.


Watching subtitled videos with the sound off: a reception study on viewer engagement, comprehension and preferences

Sonia Szkriba, Agnieszka Szarkowska, Sharon Black

According to a recent survey by Verizon Media and Publicis Media (2019), 69% of people watch videos with the sound off in public places, such as train stations, pubs or gyms (McCue 2019). Even more interestingly, 25% of viewers declared that they regularly watch videos with no sound in private. The reasons for viewing video content with the sound off include being in a quiet place or not having headphones. This new way of interacting with subtitled content has not yet received much attention in audiovisual translation research, and previous research into the impact of the presence or absence of sound on subtitle viewing is scarce. An early study by d'Ydewalle et al. (1991) showed that Dutch-speaking students watching a Dutch video with Dutch subtitles and no sound engaged more with the subtitles, spending more time in the subtitle area, compared to the condition with sound. In a study by Łuczak (2017), Polish viewers watching subtitled videos with the sound off did not achieve lower comprehension scores than those who watched the videos with the sound on, regardless of whether they could understand the language spoken in the video (English) or not (Hungarian). However, their self-reported cognitive load, operationalised by three indicators (difficulty, mental effort and frustration), was much higher than in the condition where the sound was present. We still know very little about how the presence or absence of sound affects viewing. How do hearing viewers engage with subtitled audiovisual content with no sound compared to subtitled videos where all the visual and auditory channels are present? Does removing the sound lead to a drop in comprehension and memory recall, less engagement and lower enjoyment? How does the absence of sound impact on viewers' cognitive load? And last but not least, why do people decide to watch videos with the sound off? With these questions in mind, we have conducted an experiment with Polish and English native speakers who watched English-language videos with English subtitles with the sound on and off. Using a within-subject design with the presence/absence of sound as the main independent variable, we tested viewers' engagement, comprehension, recall, cognitive load and enjoyment with a battery of tests. We also conducted semi-structured interviews for completeness (Bryman, 2006), i.e., to "bring together a more comprehensive account of the area of enquiry" (ibid.) and to gain a more in-depth understanding of participants' experiences and preferences. Our goal is to better understand how viewers engage with subtitled videos in this new type of viewing situation.

Bryman, A., 2006. Integrating quantitative and qualitative research: how is it done? Qualitative Research, 6(1), pp.97-113. 

d'Ydewalle, Géry, C. Praet, K. Verfaillie, and J. Van Rensbergen. 1991. "Watching subtitled television: automatic reading behavior." Communication Research 18 (5):650-666. 

Łuczak, Krzysztof. 2017. "The effects of the language of the soundtrack on film comprehension, cognitive load and subtitle reading patterns. An eye-tracking study." MA Thesis, Institute of Applied Linguistics, University of Warsaw.

McCue, TJ. 2019. "Verizon Media Says 69 Percent Of Consumers Watching Video With Sound Off." Forbes, 31 July 2019.

Sonia Szkriba (presenter) is a pre-doctoral researcher at the Doctoral School of Humanities, University of Warsaw, and in the international research project "WATCH ME. Watching Viewers Watch Subtitled Videos. Audiovisual and linguistic factors in subtitle processing". She is also a subtitling practitioner, with a particular focus on subtitling for the deaf and hard-of-hearing in theatre.

Agnieszka Szarkowska (presenter) is University Professor in the Institute of Applied Linguistics at the University of Warsaw, Head of the research group Audiovisual Translation Lab (AVT Lab), and Honorary Research Associate at University College London. Agnieszka is a researcher, academic teacher, ex-translator, and translator trainer. Her research projects include eye tracking studies on subtitling, audio description, multilingualism in subtitling for the deaf and the hard of hearing, and respeaking. Drawing on her passion for teaching, she has co-founded AVT Masterclass, an online platform for professional audiovisual translation education. Agnieszka is a member of the European Association for Studies in Screen Translation (ESIST) and a recipient of the Jan Ivarsson Award 2022.

Sharon Black (presenter) is Lecturer in Interpreting and Translation at the University of East Anglia (UK). Her principal research interests are in audiovisual translation and media accessibility, in particular the reception and cognitive processing of translated audiovisual content, arts and media accessibility, and AVT for children and young people. Sharon is currently leading a British Academy / Leverhulme funded project investigating how deaf and hard of hearing children use subtitles to access videos, and is participating in WATCH ME, an international project studying the reception of subtitles using eye tracking. Sharon was also Co-Investigator on Erasmus+ funded projects Digital Accessibility for You (2019-2021) and Accessible Culture and Training (2015-2018). Sharon is President of the European Association for Studies in Screen Translation (ESIST).

17.00 - 18.30: Session 17 - Subtitling varia

Queer translation community: access to queer possibilities

Boyi Huang

This paper examines the role of translators, particularly subtitlers, and their communities in LGBT+ movements in Mainland China (MC). While homosexuality is neither criminalised nor classified as a mental illness in MC, sexual minorities in general still face obstinate public discourses that stigmatise them. Moreover, depictions of sexual minorities have been consistently forbidden in mainstream and/or official media (Ellis-Petersen 2016). Such restrictions leave the general public, and even Chinese LGBT+ people themselves, with little information and language to understand their sexuality in nonheteronormative terms. However, a number of queer translation communities voluntarily translate exclusively queer media content into Chinese and make it available to audiences in MC. Studies of such queer translation communities have shown that their translations make important contributions to domestic discourse on LGBT+ identities (e.g. Guo & Evans 2020; Guo 2021). Their findings have shed light on the role of translation in Chinese queer politics, calling for further investigation of the role of the translator (community) in the overall LGBT+ movements in MC. This study set out to explore an online queer subtitling group that focuses on editing, distributing and subtitling LGBT+-themed audiovisual programmes for Chinese audiences. It entails a one-year period of ethnographic fieldwork in the group's online community, during which the researcher has gained close contact with the community members and immersive experience of their day-to-day community activities. The preliminary findings indicate that the subtitlers were once purely audience members themselves, who escaped from heteronormative realities into the many queer possibilities depicted in LGBT+ audiovisual programmes. Through comprehending possibilities of nonheteronormative sexualities and lifestyles, they experienced self-recognition of their own sexualities. They later turned into subtitlers who aim to help other people gain access to such possibilities and experiences, within a semi-closed community outside the state media circuit. The paper concludes by arguing that the queer translation community is a form of Queer Community Media 'produced in, by and for queer communities' (Bao 2021, 12), and that it provides important means for queer people to re-imagine their identities and communities in MC, where mainstream discourses and physical spaces constrain such experiences.

Boyi Huang is currently a PhD candidate at Dublin City University. His research interests include Audiovisual Translation, Queer Media, Ethnography, and Digital Culture.


Busting ghost titles on streaming services

Jan-Louis Kruger, Sixin Liao

With the proliferation of streaming services and a shift towards on-demand digital media, subtitling has gained renewed prominence. This is the case when non-English content gains international prominence, but also as more and more viewers switch on the subtitles for other reasons. This trend in viewing behaviour is documented in popular articles such as "Young people prefer watching TV with subtitles, study claims" (NME, 15 November 2021, Sam Warner) and "Lights, camera, caption! Why subtitles are no longer just for the hard of hearing" (The Guardian, 22 July 2019, Hannah J. Davies). One consequence of this broadened user base is that more and more viewers who do not technically need subtitles are using them, and are unhappy when the subtitles do not say exactly what they hear. In this paper we will not focus on such complaints, but rather on a phenomenon that all subtitle users, regardless of their motivation for using subtitles, experience on a regular basis: ghost titles. Ghost titles are those elusive subtitles that make us doubt our sanity. We either spot them in our peripheral vision, only to find them gone by the time our eyes have moved down to start reading, or they disappear while we are still reading. In many cases ghost titles meet the minimum duration and maximum speed requirements, but these requirements do not take into account the time it takes the eyes to move from the video image down to the text of the subtitle (latency), or the varying demands of dynamic video images, which often take priority when viewers have to decide where to focus. The one-speed-fits-all approach also means that even when subtitles meet the minimum speed requirements, many (mainly shorter) subtitles are not on screen long enough to allow viewers to finish reading them, resulting in frustrating viewing experiences. A recent eye-tracking study (Kruger, Wisniewska, & Liao, 2022) established that fast subtitle speeds result in more words in subtitles being skipped, and in more subtitles not being read to completion, confirming this effect. For example, in a small sample of 11 films reported on in that study, more than 15% of the subtitles were presented at speeds faster than 20 cps. In order to determine just how prevalent fast subtitles are on some of the leading streaming services, this paper will present an analysis of subtitle speed based on a corpus of subtitles (around 40 feature films and 60 episodes from series) from a selection of these services. This analysis will include data on the percentage of subtitles faster than 20 cps, the percentage of very short subtitles (1 to 7 characters) shorter than 1 second in duration, and the percentage of short subtitles (8 to 20 characters) that are on screen for less than 1.5 seconds. In addition, we will present eye-tracking data on the latency from the moment a subtitle appears on screen until viewers start reading it, based on data from more than 150 participants in three separate experiments.
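As a rough illustration of the corpus analysis described above, the sketch below flags subtitles against the three thresholds the paper names. It is a minimal sketch rather than the authors' tooling: a real pipeline would parse SRT/VTT timecodes, and character-counting conventions (with or without spaces) vary.

    from dataclasses import dataclass

    @dataclass
    class Subtitle:
        start: float  # onset in seconds
        end: float    # offset in seconds
        text: str

    def cps(sub: Subtitle) -> float:
        """Presentation speed in characters per second (spaces counted)."""
        return len(sub.text) / (sub.end - sub.start)

    def ghost_title_flags(sub: Subtitle) -> list[str]:
        """Flags matching the three thresholds analysed in the paper."""
        n, dur, flags = len(sub.text), sub.end - sub.start, []
        if cps(sub) > 20:
            flags.append("faster than 20 cps")
        if 1 <= n <= 7 and dur < 1.0:
            flags.append("very short subtitle shown under 1 s")
        if 8 <= n <= 20 and dur < 1.5:
            flags.append("short subtitle shown under 1.5 s")
        return flags

    # Invented examples: a one-word exclamation, a short line, a fast line.
    for sub in [Subtitle(0.0, 0.8, "Run!"),
                Subtitle(2.0, 3.2, "He's gone."),
                Subtitle(5.0, 6.4, "I never said she stole your money.")]:
        print(f"{cps(sub):5.1f} cps  {ghost_title_flags(sub) or ['ok']}  {sub.text}")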

Jan-Louis Kruger (presenter) is professor of Linguistics at Macquarie University. His research focuses on the processing of language in multimodal contexts, specifically in audiovisual translation, reading, and interpreting. His main approaches are aligned with cognitive psychology and psycholinguistics. Primarily, his projects focus on investigating cognitive processing when more than one source of information has to be integrated, as in the reception of subtitles or the production of interpreting. He is on the editorial board of the Journal of Audiovisual Translation. https://orcid.org/0000-0002-4817-5390 

Sixin Liao is a lecturer in Translation Studies at Macquarie University in Australia. She obtained her Master of Research (2018) and PhD (2021) in Linguistics from Macquarie University, after completing a coursework Master (2016) in Translation and Interpreting Studies at Manchester University in the UK.  Her PhD project and recent publications in international journals focus on using eye tracking combined with post-hoc measures to understand the reading of subtitles in multimodal contexts such as educational videos.


Human and non-human interaction in film festival subtitling networks

Stavroula Tsiara

Although limited in number, there are contributions in audiovisual translation (AVT), and subtitling in particular, which have dealt with research questions concerning the subtitler's profile and agency in networks, such as Abdallah (2011), Díaz Cintas (2012) and Künzli (2017). However, especially with regard to film festival subtitling, several aspects have not yet been sufficiently explored. This empirical research, which is part of broader ongoing research for my PhD thesis on sociological aspects of subtitling, focuses on human agency and human-machine crossover in film festival subtitling in Greece, a country with a long subtitling tradition, and can therefore hopefully enrich international AVT research. The first area of interest is that, in addition to providing the subtitles, Greek film festival subtitlers manually synchronise them in the cinema venues using appropriate hardware and software tools, a practice which is similar in other countries, for example Spain (Martínez-Tejerina, 2014). Hence, besides the subtitling company they work for, they cooperate with many festival departments, such as projection booths, screening coordination and programming, and venue management. Another distinctive aspect is that film festival subtitlers have contact with the audience and, quite often, the film director or other film collaborators who are also present during the screening. In this way, subtitlers receive direct feedback on their work through spontaneous reactions and comments. Furthermore, the COVID-19 pandemic has brought about significant changes in the film festival circuit; many film festivals, not only in Greece but also internationally, were held online via streaming platforms or applied a hybrid model of screenings both in venues and on platforms (Smits, 2021). This has created new challenges for subtitlers, including intense interaction with festivals' technical departments. The collected data consist of field notes taken during various international film and documentary festivals in Greece over the last three years, personal observations, and experiences as a film festival subtitler. This research seeks to depict the subtitlers' evolving roles and to shed light on collaborative practices between the agents, as well as on human and non-human interaction in an ever-changing digital age, aiming to contribute to the international mapping of the subtitling field.

Abdallah, K. (2011) Quality Problems in AVT Production Networks: Reconstructing an Actor-network in the Subtitling Industry. In Serban A., Matamala A. and Lavaur J. M. (eds.), Audiovisual Translation in Close-up: Practical and Theoretical Approaches, Bern: Peter Lang, 173-186. 

Díaz Cintas, J. (2012). Subtitling. Theory, Practice and Research. In Millán, C. & Bartrina F. (eds.), The Routledge Handbook of Translation Studies. London: Routledge, 273-287. 

Künzli, A. (2017). Die Untertitelung – von der Produktion zur Rezeption. Berlin: Frank & Timme.

Martínez-Tejerina, A. (2014). Subtitling for Film Festivals: Process, Techniques and Challenges. TRANS, 18: 215-225. Retrieved 13 Aug 2021 from: http://www.trans.uma.es/Trans_18/Trans18_215-225_art5.pdf

Smits, R. (2021). European Films in Transition? Film Festival Formats in Times of COVID. Thessaloniki: Thessaloniki International Film Festival. Retrieved 2 May 2022 from https://www.filmfestival.gr/en/professionals-b2b/research-european-film-festivals-in-transition

Stavroula Tsiara is a PhD candidate in the Department of German Language and Literature of the Aristotle University of Thessaloniki (AUTh). Her PhD thesis pertains to audiovisual translation and, more specifically, socio-technical aspects of film festival subtitling. She has a BA in German Language and Literature, an MA in European Literature and Culture and a Translator’s Certificate of Goethe-Institut Thessaloniki after successfully completing a 2-year Translation Studies Programme. She is currently teaching in the Joint Postgraduate Studies Programme “Conference Interpreting and Translation” of AUTh. Moreover, working as a subtitler, she has been active in the field of international film festival subtitling for the last 15 years, thus combining her academic interests with her professional status.