A review on abusive content automatic detection: approaches, challenges and opportunities
The increasing use of social media has led to the emergence of a new challenge in the form of abusive content, which takes many forms, such as hate speech, cyberbullying, offensive language, and abusive language. This article presents a review of approaches to the automatic detection of abusive content, focusing on recent contributions that use natural language processing (NLP) technologies to detect abusive content in social media. We adopt the PRISMA flow chart to select related papers, filtering with a set of inclusion and exclusion criteria. As a result, we selected 25 papers for meta-analysis, and another 87 papers published during 2017–2021 are cited in this article. In addition, we searched three repositories for available datasets related to abusive content categories and highlight some points about the obtained results. Moreover, following this comprehensive review, the article proposes a new taxonomy of automatic abusive content detection covering five different aspects and tasks. The proposed taxonomy gives insights and a holistic view of the automatic detection process. Finally, this article discusses and highlights the challenges and opportunities in automatic abusive content detection.
Published in: PeerJ Computer Science
Version: VoR (Version of Record)
Citation: Alrashidi, B., Jamal, A., Khan, I. and Alkhathlan, A., 2022. A review on abusive content automatic detection: approaches, challenges and opportunities. PeerJ Computer Science, 8, p.e1142.
Cardiff Met Affiliation:
- Cardiff School of Technologies