Case studies of hate speech on social media: analysis and automatic detection
7 May 2019
UAntwerp - Stadscampus - C.104, Grote Kauwenberg 18, 2000 Antwerpen. Enter through building B (Prinsstraat 13) or building D (Grote Kauwenberg 18).
2:30 PM - 3:30 PM
Organization / co-organization:
CLiPS colloquium by Sylvia Jaki
Initially praised for their ability to connect people worldwide, social media have also turned out to be drivers of polarisation and radicalisation. Polarisation can be observed on a daily basis, given the striking number of offensive and discriminatory communicative acts directed at individuals or groups, so-called hate speech. Due to the detrimental effects of hate speech on individuals and on online discourse in general, some ground-breaking regulatory measures have recently been taken against hate speech and fake news, such as the German NetzDG. In this context, machine learning, more specifically automatic detection systems, comes into play, as automatic detection, and potentially also removal, is often discussed as an effective strategy to fight hate speech and fake news online.
The first part of this talk will be dedicated to the qualitative and quantitative analysis of communicative acts ranging from uncivil to toxic, in order to provide a general overview of the problem. The second part will focus on automatic hate speech detection, both on its potential and on the pitfalls and challenges that come with it. The talk will draw on three case studies: German right-wing hate speech on Twitter (Jaki & De Smedt 2018), misogynist hate speech on the forum Incels.me (Jaki, De Smedt et al. 2018), and comments on the official Facebook pages of the main political parties and their leading candidates before the 2017 German federal elections.