
A review on abusive content automatic detection: approaches, challenges and opportunities

journal contribution
posted on 21.11.2022, 16:39 authored by Bedour Alrashidi, Amani Jamal, Imtiaz Khan, Ali Alkhathlan

The increasing use of social media has led to a new challenge in the form of abusive content, which takes many forms, including hate speech, cyberbullying, offensive language, and abusive language. This article presents a review of approaches to the automatic detection of abusive content, focusing on recent contributions that use natural language processing (NLP) technologies to detect abusive content on social media. We adopted the PRISMA flow chart to select related papers and filter them against inclusion and exclusion criteria. As a result, we selected 25 papers for meta-analysis, and another 87 papers spanning 2017–2021 are cited in this article. In addition, we searched three repositories for available datasets related to the abusive content categories and highlight some points about the obtained results. Moreover, after a comprehensive review, this article proposes a new taxonomy of abusive content automatic detection covering five different aspects and tasks. The proposed taxonomy gives insights and a holistic view of the automatic detection process. Finally, this article discusses and highlights the challenges and opportunities of the abusive content automatic detection problem.

History

Published in

PeerJ Computer Science

Publisher

PeerJ

Version

VoR (Version of Record)

Citation

Alrashidi, B., Jamal, A., Khan, I. and Alkhathlan, A., 2022. A review on abusive content automatic detection: approaches, challenges and opportunities. PeerJ Computer Science, 8, p.e1142.

Electronic ISSN

2376-5992

Cardiff Met Affiliation

  • Cardiff School of Technologies

Cardiff Met Authors

Imtiaz Khan

Copyright Holder

© The Authors

Language

en