FALCoN: Detecting and classifying abusive language in social networks using context features and unlabeled data
Affiliations: 1. School of Economics and Management, Xidian University, Xi'an 710126, China; 2. School of Information Management, Wuhan University, Wuhan 430072, China; 3. Information Retrieval and Knowledge Mining Laboratory, Wuhan University, Wuhan 430072, China
Abstract: Social networks have grown into a widespread form of communication that allows large numbers of users to participate in conversations and consume information at any time. The casual nature of social media permits nonstandard terminology, some of which may be considered rude and derogatory, and a significant portion of social media users are found to use disrespectful language. The problem may intensify in certain developing countries where young children are granted unsupervised access to social media platforms, and the sheer volume of content generated daily by millions of users makes it impractical for humans to monitor and regulate inappropriate material. Adolescents exposed to these harmful language patterns without adequate supervision may feel obliged to adopt them, and unrestricted aggression in online forums may result in cyberbullying and other dreadful occurrences. While computational linguistics research has addressed the difficulty of detecting abusive dialogue, open issues remain for low-resource languages with little annotated data, where the majority of supervised techniques perform poorly. Moreover, social media content is often presented in complex, context-rich formats that encourage creative user involvement. We therefore propose to improve abusive language detection and classification in a low-resource setting by exploiting both the abundant unlabeled data and the context features through the co-training protocol, in which two machine learning models, each learning from an orthogonal set of features, teach each other, resulting in an overall performance improvement. Empirical results reveal that our proposed framework achieves F1 values of 0.922 and 0.827, surpassing the state-of-the-art baselines by 3.32% and 45.85% for binary and fine-grained classification tasks, respectively.
In addition to demonstrating the efficacy of co-training in a low-resource setting for abusive language detection and classification, the findings shed light on several opportunities to use unlabeled data and the contextual characteristics of social networks in a variety of social computing applications.
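The co-training protocol described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes two disjoint feature views (standing in for the paper's content features and context features), uses a toy nearest-centroid classifier in place of the authors' models, and lets each view's model pseudo-label the unlabeled examples it is most confident about, growing a shared training pool. All names (`CentroidClassifier`, `co_train`, `k`, `rounds`) are hypothetical.

```python
import numpy as np

class CentroidClassifier:
    """Toy stand-in model: predicts the class whose feature centroid is nearest."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_proba(self, X):
        # Turn inverse distances to centroids into a soft confidence score.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        inv = 1.0 / (d + 1e-9)
        return inv / inv.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[np.argmax(self.predict_proba(X), axis=1)]

def co_train(Xa, Xb, y, labeled, rounds=5, k=2):
    """Co-training sketch: two models on orthogonal views Xa and Xb.

    Each round, both models are fit on the labeled + pseudo-labeled pool;
    each then pseudo-labels its k most confident unlabeled examples, so the
    two views effectively teach each other.
    """
    labeled = set(labeled)
    pseudo = {}  # example index -> pseudo label
    for _ in range(rounds):
        idx = sorted(labeled) + sorted(pseudo)
        yy = np.array([y[i] if i in labeled else pseudo[i] for i in idx])
        model_a = CentroidClassifier().fit(Xa[idx], yy)
        model_b = CentroidClassifier().fit(Xb[idx], yy)
        unl = [i for i in range(len(y)) if i not in labeled and i not in pseudo]
        if not unl:
            break
        for model, X in ((model_a, Xa), (model_b, Xb)):
            probs = model.predict_proba(X[unl])
            conf = probs.max(axis=1)
            # Keep only the k most confident pseudo-labels from this view.
            for j in np.argsort(-conf)[:k]:
                i = unl[j]
                pseudo.setdefault(i, model.classes_[probs[j].argmax()])
    # Final model trained on both views with the enlarged label pool.
    idx = sorted(labeled) + sorted(pseudo)
    yy = np.array([y[i] if i in labeled else pseudo[i] for i in idx])
    return CentroidClassifier().fit(np.hstack([Xa, Xb])[idx], yy)
```

In this sketch the two views never mix during training rounds, which is the core co-training assumption: each view alone is informative enough to label examples the other can then learn from.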
Keywords: Abusive language detection; Social networking service; Co-training; Ethical natural language processing
Indexed by ScienceDirect and other databases.