TY - JOUR
T1 - A critical reflection on the use of toxicity detection algorithms in proactive content moderation systems
AU - Warner, Mark
AU - Strohmayer, Angelika
AU - Higgs, Matthew
AU - Coventry, Lynne
N1 - © 2025 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Data availability statement:
Anonymous workshop transcripts are available on request. Please email the corresponding author.
PY - 2025/4/1
Y1 - 2025/4/1
N2 - Toxicity detection algorithms, originally designed for reactive content moderation systems, are being deployed in proactive end-user interventions to moderate content. Yet there has been little critique of the use of these algorithms within this moderation paradigm. We conducted design workshops with four stakeholder groups, asking participants to embed a toxicity detection algorithm into an imagined mobile phone keyboard. This allowed us to critically explore how such algorithms could be used to proactively reduce the sending of toxic content. We found that contextual factors, such as platform culture and affordances and scales of abuse, impact perceptions of toxicity and the effectiveness of the system. We identify different types of end-users across a continuum of intention to send toxic messages, from unaware users to those who are determined and organised. Finally, we highlight the potential for certain end-user groups to misuse these systems to validate their attacks, to gamify hate, and to manipulate algorithmic models to exacerbate harm.
AB - Toxicity detection algorithms, originally designed for reactive content moderation systems, are being deployed in proactive end-user interventions to moderate content. Yet there has been little critique of the use of these algorithms within this moderation paradigm. We conducted design workshops with four stakeholder groups, asking participants to embed a toxicity detection algorithm into an imagined mobile phone keyboard. This allowed us to critically explore how such algorithms could be used to proactively reduce the sending of toxic content. We found that contextual factors, such as platform culture and affordances and scales of abuse, impact perceptions of toxicity and the effectiveness of the system. We identify different types of end-users across a continuum of intention to send toxic messages, from unaware users to those who are determined and organised. Finally, we highlight the potential for certain end-user groups to misuse these systems to validate their attacks, to gamify hate, and to manipulate algorithmic models to exacerbate harm.
U2 - 10.1016/j.ijhcs.2025.103468
DO - 10.1016/j.ijhcs.2025.103468
M3 - Article
SN - 1071-5819
VL - 198
JO - International Journal of Human-Computer Studies
JF - International Journal of Human-Computer Studies
M1 - 103468
ER -