Abstract
Good explanations help people understand hateful memes and can mitigate their sharing. While AI-enabled automatic detection has proliferated, we argue that quality-controlled crowdsourcing is an effective strategy for producing good explanations of hateful memes. This paper proposes a Generate-Annotate-Revise workflow for crowdsourcing explanations and presents results from two user studies. Study 1 evaluated the objective quality of the explanations on three measures: detailedness, completeness, and accuracy, and found that the proposed workflow generated higher-quality explanations than a single-stage workflow without quality control. Study 2 used an online experiment to examine how different explanations affect users' perceptions. Results from 127 participants demonstrated that people without prior cultural knowledge reported significantly greater perceived understanding and awareness of hateful memes when presented with explanations generated by the proposed multi-stage workflow rather than single-stage or machine-generated explanations.
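The Generate-Annotate-Revise workflow named in the abstract is a three-stage crowdsourcing pipeline: workers first draft an explanation, other workers annotate it for quality issues, and a final worker revises it to address those issues. Below is a minimal sketch of that pipeline in Python; the `WorkerPool` interface and its methods (`draft_explanation`, `flag_issues`, `rewrite`) are hypothetical stand-ins for the paper's crowdsourcing tasks, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Protocol

class WorkerPool(Protocol):
    """Hypothetical interface to crowd-worker tasks (an assumption,
    not the paper's API)."""
    def draft_explanation(self, meme_id: str) -> str: ...
    def flag_issues(self, text: str) -> list[str]: ...
    def rewrite(self, text: str, issues: list[str]) -> str: ...

@dataclass
class Explanation:
    meme_id: str
    text: str
    issues: list[str] = field(default_factory=list)

def generate(meme_id: str, pool: WorkerPool) -> Explanation:
    # Stage 1 (Generate): a crowd worker drafts an explanation.
    return Explanation(meme_id, pool.draft_explanation(meme_id))

def annotate(expl: Explanation, pool: WorkerPool) -> Explanation:
    # Stage 2 (Annotate): workers flag quality issues, e.g. missing
    # cultural context, inaccuracies, or insufficient detail.
    expl.issues = pool.flag_issues(expl.text)
    return expl

def revise(expl: Explanation, pool: WorkerPool) -> Explanation:
    # Stage 3 (Revise): a worker rewrites the draft to resolve flags.
    if expl.issues:
        expl.text = pool.rewrite(expl.text, expl.issues)
        expl.issues = []
    return expl

def run_workflow(meme_id: str, pool: WorkerPool) -> Explanation:
    # The annotate and revise passes are where quality control enters.
    return revise(annotate(generate(meme_id, pool), pool), pool)
```

A single-stage baseline would stop after `generate`; the two added stages correspond to the quality control that the abstract credits for the higher-quality explanations.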
| Original language | English |
| --- | --- |
| Article number | 117 |
| Journal | Proceedings of the ACM on Human-Computer Interaction |
| Volume | 7 |
| Issue number | CSCW1 |
| DOIs | |
| State | Published - 16 Apr 2023 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2023 ACM.
Keywords
- crowdsourcing
- explainable artificial intelligence
- hateful memes
- hateful memes explanation
- user study