Similar Documents
20 similar documents found (search time: 10 ms)
1.
The struggle of social media platforms to moderate content in a timely manner encourages users to abuse such platforms to spread vulgar or abusive language, which, when repeated, becomes cyberbullying: a social problem taking place in virtual environments, yet with real-world consequences, such as depression, withdrawal, or even suicide attempts of its victims. Systems for the automatic detection and mitigation of cyberbullying have been developed but, unfortunately, the vast majority of them are for the English language, with only a handful available for low-resource languages. To estimate the present state of research and recognize the needs for further development, in this paper we present a comprehensive systematic survey of studies done so far on automatic cyberbullying detection in low-resource languages. We investigated all available studies on this topic: more than seventy published studies on automatic detection of cyberbullying or related language in low-resource languages and dialects, published between around 2017 and January 2023. This paper covers 23 low-resource languages and dialects, including Bangla, Hindi, Dravidian languages and others. In the survey, we identify some of the research gaps of previous studies, which include the lack of reliable definitions of cyberbullying and its relevant subcategories, and biases in the acquisition and annotation of data. Based on those research gaps, we provide suggestions for improving the general research conduct in cyberbullying detection, with a primary focus on low-resource languages. Following those suggestions, we collect and release a cyberbullying dataset in the Chittagonian dialect of Bangla and propose a number of initial ML solutions trained on that dataset. In addition, a pre-trained transformer-based model, BanglaBERT, was also applied.
We conclude with additional discussions on ethical issues regarding such studies, highlight how our survey improves on similar surveys done in the past, and discuss the usefulness of recently popular AI-enhanced tools for streamlining such scientific surveys.
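The abstract above does not specify the initial ML solutions in detail; as a purely illustrative sketch, a bag-of-words baseline of the kind commonly used as a first cyberbullying classifier might look as follows (the texts, labels, and feature choices are invented toys, not the released Chittagonian dataset):

```python
# Hypothetical minimal baseline for binary cyberbullying detection,
# in the spirit of the "initial ML solutions" the abstract mentions.
# The texts and labels below are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = ["you are awful and stupid", "have a great day friend",
               "nobody likes you loser", "thanks for the kind words",
               "you are so dumb and ugly", "what a lovely photo"]
train_labels = [1, 0, 1, 0, 1, 0]  # 1 = bullying, 0 = not bullying

# TF-IDF over unigrams and bigrams, then a linear classifier
vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(train_texts)
clf = LogisticRegression().fit(X, train_labels)

pred = clf.predict(vec.transform(["you are stupid and awful"]))[0]
```

A real system would of course need language-appropriate tokenization for the Chittagonian dialect and a much larger labeled corpus.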

2.
Hate speech is an increasingly important societal issue in the era of digital communication. Hateful expressions often make use of figurative language and, although they represent, in some sense, the dark side of language, they are also often prime examples of creative use of language. While hate speech is a global phenomenon, current studies on automatic hate speech detection are typically framed in a monolingual setting. In this work, we explore hate speech detection in low-resource languages by transferring knowledge from a resource-rich language, English, in a zero-shot learning fashion. We experiment with traditional and recent neural architectures, and propose two joint-learning models, using different multilingual language representations to transfer knowledge between pairs of languages. We also evaluate the impact of additional knowledge in our experiments, by incorporating information from a multilingual lexicon of abusive words. The results show that our joint-learning models achieve the best performance on most languages. However, a simple approach that uses machine translation and a pre-trained English language model achieves a robust performance. In contrast, Multilingual BERT fails to obtain a good performance in cross-lingual hate speech detection. We also experimentally found that the external knowledge from a multilingual abusive lexicon is able to improve the models’ performance, specifically in detecting the positive class. The results of our experimental evaluation highlight a number of challenges and issues in this particular task. One of the main challenges is related to the issue of current benchmarks for hate speech detection, in particular how bias related to the topical focus in the datasets influences the classification performance. The insufficient ability of current multilingual language models to transfer knowledge between languages in the specific hate speech detection task also remains an open problem.
However, our experimental evaluation and our qualitative analysis show how the explicit integration of linguistic knowledge from a structured abusive language lexicon helps to alleviate this issue.
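One concrete way to inject knowledge from a multilingual abusive lexicon, as the abstract describes, is to add lexicon-match counts as extra features. A hedged sketch; the mini-lexicon below is an invented stand-in for a real resource such as HurtLex:

```python
# Toy multilingual abusive lexicon (English and Spanish stand-ins).
ABUSIVE_LEXICON = {"idiot", "idiota", "stupid", "estupido"}

def lexicon_feature(text):
    # Count tokens that match the abusive lexicon; this count can be
    # appended to any feature vector fed to a hate speech classifier.
    tokens = text.lower().split()
    return sum(t in ABUSIVE_LEXICON for t in tokens)

feat_en = lexicon_feature("You are a stupid idiot")
feat_es = lexicon_feature("Eres un idiota")
```

Because the lexicon is multilingual, the same feature works across language pairs, which is the intuition behind its usefulness in the zero-shot transfer setting.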

3.
We study the selection of transfer languages for automatic abusive language detection. Instead of preparing a dataset for every language, we demonstrate the effectiveness of cross-lingual transfer learning for zero-shot abusive language detection. This way we can use existing data from higher-resource languages to build better detection systems for low-resource languages. Our datasets are from seven different languages from three language families. We measure the distance between the languages using several language similarity measures, especially by quantifying the World Atlas of Language Structures. We show that there is a correlation between linguistic similarity and classifier performance. This discovery allows us to choose an optimal transfer language for zero-shot abusive language detection.
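The core computation hinted at above can be sketched as follows: encode each language as a vector of typological features and measure its similarity to the source language (the feature values here are invented placeholders, not actual WALS codings):

```python
# Sketch of transfer-language selection via typological similarity.
# Each language is a toy binary-coded vector of WALS-style features.
import numpy as np

wals = {
    "english": np.array([1, 0, 1, 1, 0]),
    "german":  np.array([1, 0, 1, 0, 0]),
    "turkish": np.array([0, 1, 0, 0, 1]),
}

def similarity(a, b):
    # Share of matching typological feature values (Hamming similarity).
    return float(np.mean(wals[a] == wals[b]))

sims = {lang: similarity("english", lang) for lang in ("german", "turkish")}
best_transfer = max(sims, key=sims.get)
```

In the paper's setting, such similarity scores would be correlated with downstream classifier performance to pick the transfer language.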

4.
Modern companies generate value by digitalizing their services and products. Knowing what customers are saying about the firm through reviews in social media content is a key factor for success in the big data era. However, social media data analysis is a complex discipline, due to the subjectivity of text reviews and the additional features present in raw data. Some frameworks proposed in the existing literature involve many steps, which increases their complexity. A two-stage framework to tackle this problem is proposed: the first stage focuses on data preparation and on finding an optimal machine learning model for the data; the second stage relies on established layers of big data architectures and obtains outcomes from the data by making the most of the machine learning model from stage one. Thus, the first stage is designed to analyze big and small datasets in a non-big-data environment, whereas the second stage analyzes big datasets by applying the machine learning model from the first stage. A case study is then presented, applying the first stage of the framework to analyze reviews of hotel-related businesses. Several machine learning algorithms were trained for two, three and five classes, with the best results found for binary classification.
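A minimal sketch of the first stage's model-selection step, assuming a scikit-learn setup; the data and the two candidate models below are toys, not the framework's actual components:

```python
# Stage one sketch: compare candidate classifiers on a small dataset
# and keep the best-scoring one for the big data stage.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy stand-in for review labels

candidates = {"logreg": LogisticRegression(),
              "tree": DecisionTreeClassifier(max_depth=3)}
# 5-fold cross-validation picks the model that generalizes best
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
```

The selected model would then be deployed inside the big data architecture of stage two.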

5.
This article describes in-depth research on machine learning methods for sentiment analysis of Czech social media. Whereas in English, Chinese, or Spanish this field has a long history and evaluation datasets for various domains are widely available, in the case of the Czech language no systematic research has yet been conducted. We tackle this issue and establish a common ground for further research by providing a large human-annotated Czech social media corpus. Furthermore, we evaluate state-of-the-art supervised machine learning methods for sentiment analysis. We explore different pre-processing techniques and employ various features and classifiers. We also experiment with five different feature selection algorithms and investigate the influence of named entity recognition and pre-processing on sentiment classification performance. Moreover, in addition to our newly created social media dataset, we also report results for other popular domains, such as movie and product reviews. We believe that this article will not only extend the current sentiment analysis research to another family of languages, but will also encourage competition, potentially leading to the production of high-end commercial solutions.
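One common feature-selection algorithm of the kind the study compares is chi-squared ranking; a toy sketch (the count matrix and labels are invented, not the Czech corpus):

```python
# Rank toy features by chi-squared association with sentiment labels.
import numpy as np
from sklearn.feature_selection import chi2

# Rows are documents, columns are counts of two toy features.
X = np.array([[3, 0], [2, 0], [0, 4], [0, 3]])
y = np.array([1, 1, 0, 0])  # 1 = positive, 0 = negative

scores, _ = chi2(X, y)
ranked = np.argsort(scores)[::-1]  # best-discriminating feature first
```

In practice the ranking would be applied to a full bag-of-words matrix and only the top-k features kept for the classifier.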

6.
Social networks have grown into a widespread form of communication that allows a large number of users to participate in conversations and consume information at any time. The casual nature of social media allows for nonstandard terminology, some of which may be considered rude and derogatory. As a result, a significant portion of social media users is found to express disrespectful language. This problem may intensify in certain developing countries where young children are granted unsupervised access to social media platforms. Furthermore, the sheer amount of social media data generated daily by millions of users makes it impractical for humans to monitor and regulate inappropriate content. If adolescents are exposed to these harmful language patterns without adequate supervision, they may feel obliged to adopt them. In addition, unrestricted aggression in online forums may result in cyberbullying and other dreadful occurrences. While computational linguistics research has addressed the difficulty of detecting abusive dialogues, questions remain unanswered for low-resource languages with little annotated data, leading the majority of supervised techniques to perform poorly. In addition, social media content is often presented in complex, context-rich formats that encourage creative user involvement. Therefore, we propose to improve the performance of abusive language detection and classification in a low-resource setting, using both the abundant unlabeled data and the context features via the co-training protocol, which enables two machine learning models, each learning from an orthogonal set of features, to teach each other, resulting in an overall performance improvement. Empirical results reveal that our proposed framework achieves F1 values of 0.922 and 0.827, surpassing the state-of-the-art baselines by 3.32% and 45.85% for binary and fine-grained classification tasks, respectively.
In addition to proving the efficacy of co-training in a low-resource situation for abusive language detection and classification tasks, the findings shed light on several opportunities to use unlabeled data and contextual characteristics of social networks in a variety of social computing applications.
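The co-training protocol described above can be sketched roughly as follows, with synthetic data standing in for the two orthogonal feature views. For simplicity this sketch pseudo-labels the whole unlabeled pool each round, whereas practical co-training usually adds only high-confidence examples:

```python
# Co-training sketch: two classifiers on orthogonal views teach each
# other via pseudo-labels on the unlabeled pool. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
# Two "views" of each example (e.g. text features vs. context features).
view1 = y[:, None] + 0.3 * rng.standard_normal((n, 3))
view2 = y[:, None] + 0.3 * rng.standard_normal((n, 3))

labeled = slice(0, 20)    # small labeled pool
unlabeled = slice(20, n)  # abundant unlabeled pool

m1 = LogisticRegression().fit(view1[labeled], y[labeled])
m2 = LogisticRegression().fit(view2[labeled], y[labeled])

for _ in range(3):  # a few co-training rounds
    # Each model pseudo-labels the unlabeled pool for its partner.
    p1 = m1.predict(view1[unlabeled])
    p2 = m2.predict(view2[unlabeled])
    m1.fit(np.vstack([view1[labeled], view1[unlabeled]]),
           np.concatenate([y[labeled], p2]))
    m2.fit(np.vstack([view2[labeled], view2[unlabeled]]),
           np.concatenate([y[labeled], p1]))

acc = float(np.mean(m1.predict(view1) == y))
```

The key requirement, as in the paper's setting, is that the two views are informative on their own and approximately independent given the label.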

7.
Warning: This paper contains abusive samples that may cause discomfort to readers. Abusive language on social media reinforces prejudice against an individual or a specific group of people, which greatly hampers freedom of expression. With the rise of large-scale pre-trained language models, classification based on pre-trained language models has gradually become a paradigm for automatic abusive language detection. However, the effect of stereotypes inherent in language models on the detection of abusive language remains unknown, although this may further reinforce biases against minorities. To this end, in this paper, we use multiple metrics to measure the presence of bias in language models and analyze the impact of these inherent biases in automatic abusive language detection. On the basis of this quantitative analysis, we propose two different debiasing strategies, token debiasing and sentence debiasing, which are jointly applied to reduce the bias of language models in abusive language detection without degrading the classification performance. Specifically, for the token debiasing strategy, we reduce the discrimination of the language model against protected attribute terms of a certain group by random probability estimation. For the sentence debiasing strategy, we replace protected attribute terms and augment the original text by counterfactual augmentation to obtain debiased samples, and use the consistency regularization between the original data and the augmented samples to eliminate the bias at the sentence level of the language model. The experimental results confirm that our method can not only reduce the bias of the language model in the abusive language detection task, but also effectively improve the performance of abusive language detection.
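The sentence-level strategy can be illustrated with a minimal counterfactual-augmentation function; the term pairs below are toy placeholders for the protected attribute terms the paper would use:

```python
# Counterfactual augmentation sketch: swap protected attribute terms
# to create a paired, debiased sample. The mapping is a toy example.
COUNTERPARTS = {"women": "men", "men": "women",
                "muslims": "christians", "christians": "muslims"}

def counterfactual(sentence):
    # Replace each protected attribute term with its counterpart,
    # leaving all other tokens unchanged.
    tokens = sentence.lower().split()
    return " ".join(COUNTERPARTS.get(t, t) for t in tokens)

aug = counterfactual("Women are bad drivers")
```

Consistency regularization would then penalize differences between the model's outputs on the original sentence and its augmented counterpart.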

8.
With the explosion of multilingual content on the Web, particularly on social media platforms, identifying the languages present in a text is becoming an important task for various applications. While automatic language identification (ALI) in social media text is considered a non-trivial task due to the presence of slang words, misspellings, creative spellings and special elements such as hashtags, user mentions, etc., ALI in a multilingual environment becomes an even more challenging task. In a highly multilingual society, code-mixing without affecting the underlying language sense has become a natural phenomenon. In such a dynamic environment, the conversational text alone often fails to identify the underlying languages present in the text. This paper proposes various methods of exploiting social conversational features to enhance ALI performance. Although social conversational features for ALI have been explored previously using methods like probabilistic language modeling, these models often fail to address issues related to code-mixing, phonetic typing, out-of-vocabulary words, etc., which are prevalent in a highly multilingual environment. This paper differs in the way the social conversational features are used, proposing text refinement strategies that are suitable for ALI in a highly multilingual environment. The contributions of this paper therefore include the following. First, this paper analyzes the characteristics of various social conversational features by exploiting language usage patterns. Second, various methods of text refinement suitable for language identification are proposed. Third, the effects of the proposed refinement methods are investigated using various sentence-level language identification frameworks.
From various experimental observations over three conversational datasets collected from the Facebook, YouTube and Twitter social media platforms, it is evident that our proposed method of ALI using social conversational features outperforms the baseline counterparts.
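One plausible text-refinement step of the kind the paper proposes is stripping social elements that carry no language signal before the text reaches the identifier; a hedged sketch (the paper's actual refinement strategies may differ):

```python
# Refinement sketch for language identification: remove hashtags,
# user mentions and URLs, then normalize whitespace.
import re

def refine(text):
    text = re.sub(r"(@\w+|#\w+|https?://\S+)", " ", text)
    return re.sub(r"\s+", " ", text).strip()

clean = refine("kya scene hai #weekend @rahul http://t.co/xyz")
```

After refinement, only the language-bearing tokens remain for the sentence-level identifier.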

9.
The generation of stereotypes allows us to simplify the cognitive complexity we have to deal with in everyday life. Stereotypes are extensively used to describe people who belong to a different ethnic group, particularly in racial hoaxes and hateful content against immigrants. This paper addresses the study of stereotypes from a novel perspective involving both psychology and computational linguistics. On the one hand, it describes an Italian social media corpus built within a social psychology study, where stereotypes and related forms of discredit were made explicit through annotation. On the other hand, it provides some lexical analysis to bring out the linguistic features of the messages collected in the corpus, and experiments for validating this annotation scheme and its automatic application to other corpora in the future. The main expected outcome is to shed some light on the usefulness of this scheme for training tools that automatically detect and label stereotypes in Italian.

10.
Hate speech detection refers broadly to the automatic identification of language that may be considered discriminatory against certain groups of people. The goal is to help online platforms to identify and remove harmful content. Humans are usually capable of detecting hatred in critical cases, such as when the hatred is non-explicit, but how do computer models address this situation? In this work, we aim to contribute to the understanding of ethical issues related to hate speech by analysing two transformer-based models trained to detect hate speech. Our study focuses on analysing the relationship between these models and a set of hateful keywords extracted from three well-known datasets. For the extraction of the keywords, we propose a metric that takes into account the division among classes to favour the most common words in hateful contexts. In our experiments, we first compare the overlap between the extracted keywords and the words to which the models pay the most attention in decision-making. Second, we investigate the bias of the models towards the extracted keywords. For the bias analysis, we characterize and use two metrics and evaluate two strategies to try to mitigate the bias. Surprisingly, we show that over 50% of the salient words of the models are not hateful and that there is a higher number of hateful words among the extracted keywords. However, we show that the models appear to be biased towards the extracted keywords. Experimental results suggest that fitting models with hateful texts that do not contain any of the keywords can reduce bias and improve the performance of the models.
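The keyword metric is described only informally above; one simple class-aware score in that spirit is the fraction of a word's occurrences that fall in the hateful class (the corpus and words below are invented toys, not the paper's actual metric or data):

```python
# Class-aware keyword scoring sketch: favor words that occur mostly
# in hateful texts relative to their overall frequency.
from collections import Counter

hateful = ["go back home you invader", "you people are invaders"]
neutral = ["welcome home everyone", "the people voted today"]

hate_counts = Counter(w for t in hateful for w in t.split())
all_counts = Counter(w for t in hateful + neutral for w in t.split())

def keyword_score(word):
    # Fraction of this word's occurrences that appear in the hateful class.
    return hate_counts[word] / all_counts[word]

scores = {w: keyword_score(w) for w in all_counts}
```

Words scoring near 1.0 are candidate "hateful keywords"; words shared evenly across classes score near 0.5 and would be filtered out.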

11.
Building and maintaining favorable social media relationships with customers require that organizations produce quality content that fits customers’ needs. So far, little if any research has conceptualized and measured this kind of content quality in the context of social media. Therefore, this study proposes and empirically examines a new construct, compatible quality of social media content, which adds to knowledge by expanding prior online content quality research beyond the traditional focus on non-fit quality beliefs. The results of structural equation modeling of responses collected from a sample of active social media users reveal at least three noteworthy findings. First, the study confirmed the multidimensionality of compatible quality of social media content, which encompasses reflective, stimulated, practiced, and advocated components. Second, there was equivalence in the measurements of the studied constructs and the structural weights of the proposed network of relationships across the different gender and experience subgroups. Third, compatible quality of social media content was found to influence continued interest, active confidence, and feedback openness. The practical and research implications of these results are discussed.

12.
13.
This study furthers investigation into exactly how Social CRM (S-CRM) is different from traditional CRM, and models the interrelationships between its capabilities. It is underpinned by dynamic capabilities theory, to explain how social media, as a resource all organizations use, can lead to differing performance outcomes. It also builds on seminal research into traditional CRM, which does not cater for the disruptive nature of social media. We outline how S-CRM is a second-order dynamic capability consisting of a set of first-order integrative dynamic capabilities that, when properly interrelated, lead to performance outcomes. We particularly model the role of S-CRM front- and back-office technology capabilities, customer engagement initiatives, and social information processes in driving customer relationship performance. Findings show that S-CRM differs from traditional CRM in a range of ways in the front- and back-offices, and provide a framework for researchers and managers in information systems and marketing to operate at strategic and tactical levels within S-CRM, while being cognisant of both.

14.
Quickly and accurately summarizing representative opinions is a key step for assessing microblog sentiments. The Ortony-Clore-Collins (OCC) model of emotion can offer a rule-based emotion export mechanism. In this paper, we propose an OCC model and a Convolutional Neural Network (CNN) based opinion summarization method for Chinese microblogging systems. We test the proposed method using real world microblog data. We then compare the accuracy of manual sentiment annotation to the accuracy using our OCC-based sentiment classification rule library. Experimental results from analyzing three real-world microblog datasets demonstrate the efficacy of our proposed method. Our study highlights the potential of combining emotion cognition with deep learning in sentiment analysis of social media data.
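A toy illustration of a rule-based OCC-style mapping from appraisal variables to emotion labels; the two appraisal dimensions and rules below are simplified placeholders, not the paper's actual rule library:

```python
# Simplified OCC-style rule: appraise an event by whether it is
# desirable and whether its prospect has been confirmed, and map the
# combination to one of four prospect-based emotions.
def occ_emotion(desirable, prospect_confirmed):
    if desirable and prospect_confirmed:
        return "satisfaction"
    if desirable and not prospect_confirmed:
        return "hope"
    if not desirable and prospect_confirmed:
        return "fears-confirmed"
    return "fear"

label = occ_emotion(desirable=True, prospect_confirmed=False)
```

The full OCC model appraises events, agents and objects along many more dimensions; a rule library of this form is what would feed the CNN-based summarization stage.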

15.
16.
Identifying the emotional causes of mental illnesses is key to effective intervention. Existing emotion-cause analysis approaches can effectively detect simple emotion-cause expressions where only one cause and one emotion exist. However, emotions may often result from multiple causes, implicitly or explicitly, with complex interactions among these causes. Moreover, the same causes may result in multiple emotions. How to model the complex interactions between multiple emotion spans and cause spans remains under-explored. To tackle this problem, a contrastive learning-based framework is presented to detect the complex emotion-cause pairs with the introduction of negative samples and positive samples. Additionally, we developed a large-scale emotion-cause dataset with complex emotion-cause instances based on subreddits associated with mental health. Our proposed approach was compared to prevailing CNN-based, LSTM-based, Transformer-based and GNN-based methods. Extensive experiments have been conducted and the quantifiable outcomes indicate that our proposed solution achieves competitive performance on simple emotion-cause pairs and significantly outperforms baseline methods in extracting complex emotion-cause pairs. Empirical studies further demonstrated that our proposed approach can be used to reveal the emotional causes of mental disorders for effective intervention.
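The contrastive objective with positive and negative samples can be sketched as an InfoNCE-style loss over span embeddings; the random vectors below stand in for encoder outputs, and this generic loss is only an assumed approximation of the framework's objective:

```python
# InfoNCE-style contrastive loss sketch: pull an emotion-span embedding
# toward its true cause span (positive) and away from negative spans.
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Softmax cross-entropy over cosine similarities; positive in slot 0.
    cands = np.vstack([positive] + list(negatives))
    cands = cands / np.linalg.norm(cands, axis=1, keepdims=True)
    a = anchor / np.linalg.norm(anchor)
    logits = cands @ a / temperature
    return float(-logits[0] + np.log(np.exp(logits).sum()))

rng = np.random.default_rng(0)
anchor = rng.standard_normal(8)
positive = anchor + 0.01 * rng.standard_normal(8)  # near-duplicate span
negatives = [rng.standard_normal(8) for _ in range(5)]

loss_easy = info_nce(anchor, positive, negatives)
# Swapping a random span into the positive slot should raise the loss.
loss_hard = info_nce(anchor, negatives[0], [positive] + negatives[1:])
```

Minimizing this loss drives matched emotion-cause pairs together in embedding space while pushing mismatched pairs apart.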

17.
18.
In this work, we propose BERT-WMAL, a hybrid model that brings together information coming from data, through a recent transformer deep learning model, with information obtained from a polarized lexicon. The result is a model for sentence polarity whose performance is comparable with the state of the art, but with the advantage of being able to provide the end-user with an explanation regarding the most important terms involved in the provided prediction. The model has been evaluated on three Italian polarity detection datasets, i.e., SENTIPOLC, AGRITREND and ABSITA. The first contains 7,410 tweets released for training and 2,000 for testing; the second includes 1,000 tweets without a train/test split; and the third includes 2,365 reviews for training and 1,171 for testing. The use of lexicon-based information proves to be effective in terms of the F1 measure, showing an improvement in F1 score on all the observed datasets: from 0.664 to 0.669 (i.e., 0.772%) on AGRITREND, from 0.728 to 0.734 (i.e., 0.854%) on SENTIPOLC and from 0.904 to 0.921 (i.e., 1.873%) on ABSITA. The usefulness of this model depends not only on its effectiveness in terms of the F1 measure, but also on its ability to generate predictions that are more explainable and especially convincing for end-users. We evaluated this aspect through a user study involving four native Italian speakers, each evaluating 64 sentences with associated explanations. The results demonstrate the validity of this approach, based on a combination of attention weights extracted from the deep learning model and the linguistic knowledge stored in the WMAL lexicon. These considerations allow us to regard the approach provided in this paper as a promising starting point for further work in this research area.
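How attention weights and lexicon knowledge might combine can be illustrated schematically: score a sentence by the attention-weighted polarity of its tokens. The weights and two-word lexicon below are invented stand-ins for the model's attention and the WMAL lexicon, and the actual BERT-WMAL mechanism may differ:

```python
# Schematic combination of attention weights with lexicon polarity.
LEXICON = {"ottimo": 1.0, "pessimo": -1.0}  # Italian: excellent / terrible

def weighted_polarity(tokens, attention):
    # Normalize attention, then sum polarity of each token weighted by
    # how much the model attended to it.
    total = sum(attention)
    return sum(a / total * LEXICON.get(t, 0.0)
               for t, a in zip(tokens, attention))

score = weighted_polarity(["servizio", "ottimo"], [0.2, 0.8])
```

The same attention-weighted terms also serve as the explanation shown to the end-user: high-attention lexicon words justify the predicted polarity.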

19.
Rumour stance classification, defined as classifying the stance of specific social media posts into one of supporting, denying, querying or commenting on an earlier post, is becoming of increasing interest to researchers. While most previous work has focused on using individual tweets as classifier inputs, here we report on the performance of sequential classifiers that exploit the discourse features inherent in social media interactions or ‘conversational threads’. Testing the effectiveness of four sequential classifiers – Hawkes Processes, Linear-Chain Conditional Random Fields (Linear CRF), Tree-Structured Conditional Random Fields (Tree CRF) and Long Short Term Memory networks (LSTM) – on eight datasets associated with breaking news stories, and looking at different types of local and contextual features, our work sheds new light on the development of accurate stance classifiers. We show that sequential classifiers that exploit the use of discourse properties in social media conversations while using only local features, outperform non-sequential classifiers. Furthermore, we show that LSTM using a reduced set of features can outperform the other sequential classifiers; this performance is consistent across datasets and across types of stances. To conclude, our work also analyses the different features under study, identifying those that best help characterise and distinguish between stances, such as supporting tweets being more likely to be accompanied by evidence than denying tweets. We also set forth a number of directions for future research.
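Of the four sequential classifiers, the Hawkes process is the easiest to sketch: each earlier post in a thread transiently raises the intensity of subsequent activity. The parameters below are illustrative, not fitted values from the paper:

```python
# Conditional intensity of a univariate Hawkes process with an
# exponential kernel: a baseline rate plus decaying excitation
# contributed by every past post in the thread.
import math

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in history if ti < t)

quiet = hawkes_intensity(5.0, history=[])           # baseline only
busy = hawkes_intensity(5.0, history=[4.5, 4.8, 4.9])  # recent burst
```

In the stance-classification setting, a multivariate version of this intensity (one dimension per stance label) is what lets the model exploit the temporal structure of the conversational thread.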

20.
Entrepreneurship has gained widespread attention in the 21st century. This study aims to examine the direct effect of self-perceived creativity on e-entrepreneurship intention and the moderating effect of social media use on the relationship between self-perceived creativity and e-entrepreneurship intention. Data were gathered from 248 Omani entrepreneurs. The findings revealed that the direct effect of self-perceived creativity and social media use significantly influences e-entrepreneurship intention. Notably, the positive impact of self-perceived creativity on e-entrepreneurship intention was moderated by social media use. This study concluded that self-perceived creativity is a crucial factor at the start of the entrepreneurial process, as it contributes to designing new products and services. In addition, this study introduced social media use as a boundary condition on the relationship between self-perceived creativity and e-entrepreneurial intention. Therefore, this study differs from other studies, as it examines whether social media improves opportunities for entrepreneurs and aids in discovering and generating them.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号