Similar Documents
Found 20 similar documents (search time: 203 ms)
1.
The wide spread of fake news and its negative impacts on society have attracted much attention to fake news detection. Existing detection methods pay particular attention to the credibility of the users sharing news on social media, and to that of news sources, based on their level of participation in fake news dissemination. However, these methods ignore the important role of a news item's topical perspective (such as its political viewpoint) in users' and sources' decisions to share or publish it. These decisions are associated with the viewpoints shared by the echo chamber a user belongs to, i.e., the user's Socio-Cognitive (SC) biases, and with the news source's partisan bias. The credibility of users and news sources therefore varies across topics according to these biases, a fact that current fake news detection studies completely ignore. In this paper, we propose a Multi-View Co-Attention Network (MVCAN) that jointly models the latent topic-specific credibility of users and news sources for fake news detection. The key idea is to represent news articles, users, and news sources such that the topical viewpoints of news articles, the SC biases of users (which determine the users' viewpoints when sharing news), and the partisan bias of news sources are encoded as vectors. A novel variant of the Multi-Head Co-Attention (MHCA) mechanism then encodes the joint interaction across views, including news-source and news-user, to implicitly model the credibility of users and news sources from their interactions in real and fake news spreading on each topic. We conduct extensive experiments on two public datasets. The results show that MVCAN significantly outperforms other state-of-the-art methods, beating the best baselines by 3% on average in terms of F1 and Accuracy.
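The co-attention at the heart of MVCAN can be illustrated with a toy, single-head version: each news (viewpoint) vector attends over the user-bias vectors and receives a credibility-weighted user context. This is a minimal sketch under assumed two-dimensional vectors, not the paper's learned multi-head MHCA variant; all names are illustrative.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def co_attention(news_vecs, user_vecs):
    """Single-head co-attention: each news (viewpoint) vector queries the
    user (socio-cognitive bias) vectors and gets back a softmax-weighted
    user context. The real MHCA variant is multi-head and learned."""
    d = len(news_vecs[0])
    out = []
    for q in news_vecs:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in user_vecs]
        w = softmax(scores)
        out.append([sum(wi * u[j] for wi, u in zip(w, user_vecs)) for j in range(d)])
    return out

# two news viewpoints attend over three user-bias vectors
ctx = co_attention([[1.0, 0.0], [0.0, 1.0]],
                   [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Each output row is a convex combination of the user vectors, so a news item aligned with one echo chamber draws its context mostly from the users sharing that viewpoint.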

2.
张国标, 李洁, 胡潇戈. 《情报科学》(Information Science), 2021, 39(10): 126-132
[Purpose/Significance] While social media has changed how news spreads and how people obtain information, it has also become the main channel for spreading fake news. Rapidly identifying fake news on social media and curbing the spread of false information is therefore essential for cleaning up cyberspace and safeguarding public security. [Method/Process] To effectively identify fake news published on social media, this paper builds on an in-depth analysis of the content characteristics of fake news and designs representations for text word vectors, text sentiment, low-level image features, and image semantic features. These are used to extract the image and text feature information of fake news on social networks and to build a fake news detection model based on multimodal feature fusion, whose performance is validated on the MediaEval2015 dataset. [Results/Conclusions] Comparing experimental results across feature combinations and classification methods shows that a multimodal model fusing text and image features can effectively improve fake news detection. [Innovation/Limitations] The study designs a fake news detection model from a multimodal perspective, fusing multiple text and image features. However, feature fusion is implemented by vector concatenation, which cannot fully exploit the complementarity of the features and easily leads to the curse of dimensionality.
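The limitation the authors note — fusion by plain vector concatenation — is visible in a one-line sketch (illustrative only; the function name is an assumption): the fused vector simply grows by the sum of the modality dimensions, and the features never interact.

```python
def fuse_by_concat(text_vec, image_vec):
    """Fusion by plain vector concatenation: the fused dimension is
    len(text) + len(image), the modalities never interact, and the
    dimension grows with every added feature type — the limitation
    the authors point out."""
    return list(text_vec) + list(image_vec)

fused = fuse_by_concat([0.2, 0.7], [0.9, 0.1, 0.4])
```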

3.
One of the most time-critical challenges for the Natural Language Processing (NLP) community is to combat the spread of fake news and misinformation. Existing approaches for misinformation detection use neural network models, statistical methods, linguistic traits, fact-checking strategies, etc. However, the menace of fake news seems to grow more vigorous with the advent of humongous and unusually creative language models. Relevant literature reveals that one major characteristic of the virality of fake news is the presence of an element of surprise in the story, which attracts immediate attention and invokes a strong emotional stimulus in the reader. In this work, we leverage this idea and propose textual novelty detection and emotion prediction as two tasks supporting automatic misinformation detection. We re-purpose textual entailment for novelty detection and use models trained on large-scale entailment and emotion datasets to classify fake information. Our results support this idea, achieving state-of-the-art (SOTA) performance (7.92%, 1.54%, 17.31% and 8.13% accuracy improvements) on four large-scale misinformation datasets. We hope that our current probe will motivate the community to explore further research on misinformation detection along this line. The source code is available on GitHub.
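The abstract's two signals can be sketched as a toy scoring rule (an illustration, not the paper's trained model): novelty is taken as low entailment with trusted prior reporting, and the strongest predicted emotion acts as the surprise/stimulus term. The weights and threshold are arbitrary assumptions; in practice `entail_prob` would come from an entailment model and `emotions` from an emotion classifier.

```python
def misinformation_score(entail_prob, emotions, weights=(0.6, 0.4)):
    """Toy combination of the two abstract-level signals: a claim NOT
    entailed by trusted prior reporting is novel (surprising), and strong
    emotion amplifies suspicion. The weights are arbitrary assumptions."""
    novelty = 1.0 - entail_prob
    emotion = max(emotions.values()) if emotions else 0.0
    w_n, w_e = weights
    return w_n * novelty + w_e * emotion

# a barely-entailed, fear-laden claim scores high
s = misinformation_score(0.1, {"fear": 0.8, "joy": 0.1})
```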

4.
Digital information exchange enables the quick creation and sharing of information, changing existing habits. Social media is replacing traditional media as end-users' main source of news, which also enables the proliferation of fake news that misinforms readers and serves the interests of its creators. As a result, automated fake news detection systems are attracting attention. However, automatic fake news detection is a major challenge, and content evaluation increasingly falls to the end-user. In the present study we therefore used information quality (IQ) as an instrument to investigate how users can detect fake news. Specifically, we examined how users perceive fake news, presented as short paragraphs, along individual IQ dimensions, and which user characteristics affect fake news detection. In an empirical study, 1123 users evaluated randomly generated stories containing statements of varying correctness along individual IQ dimensions. The results reveal that IQ can be used as a tool for fake news detection. Our findings show that (1) domain knowledge has a positive impact on fake news detection; (2) education combined with domain knowledge improves fake news detection; and (3) the personality trait conscientiousness contributes significantly to fake news detection in all dimensions.

5.
In recent years, fake news detection has been a significant task attracting much attention. However, most current approaches utilize features from a single modality, such as text or image, ignoring comprehensive fusion between features of different modalities. To deal with this problem, we propose a novel model named Bidirectional Cross-Modal Fusion (BCMF), which comprehensively integrates textual and visual representations in a bidirectional manner. Specifically, the proposed model is decomposed into four submodules: input embedding, image2text fusion, text2image fusion, and prediction. We conduct intensive experiments on four real-world datasets: Weibo, Twitter, Politi, and Gossip. The results show improvements in classification accuracy of 2.2, 2.5, 4.9, and 3.1 percentage points over state-of-the-art methods on Weibo, Twitter, Politi, and Gossip, respectively. The experimental results suggest that the proposed model better captures the integrated information of different modalities and generalizes well across datasets. Further experiments suggest that the bidirectional fusions, the number of multi-attention heads, and the aggregating function all affect the performance of cross-modal fake news detection. The research sheds light on the role of bidirectional cross-modal fusion in leveraging multi-modal information to improve fake news detection.
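A toy, single-head version of the two fusion directions might look like the following. This is a sketch under assumed toy vectors: the real BCMF uses learned projections, multi-head attention, and a trained prediction module.

```python
import math

def attend(queries, keys):
    """Scaled dot-product attention: each query is replaced by a
    softmax-weighted mixture of the keys."""
    d = len(queries[0])
    out = []
    for q in queries:
        s = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(s)
        e = [math.exp(x - m) for x in s]
        z = sum(e)
        w = [x / z for x in e]
        out.append([sum(wi * k[j] for wi, k in zip(w, keys)) for j in range(d)])
    return out

def bidirectional_fuse(text_tokens, image_patches):
    """image2text and text2image fusion, then mean-pool each direction
    and concatenate for a downstream prediction module."""
    t2i = attend(text_tokens, image_patches)  # text queries image
    i2t = attend(image_patches, text_tokens)  # image queries text
    def mean(vs):
        return [sum(v[j] for v in vs) / len(vs) for j in range(len(vs[0]))]
    return mean(t2i) + mean(i2t)

# one text token, two image patches -> a 4-dim fused representation
fused = bidirectional_fuse([[1.0, 0.0]], [[0.0, 1.0], [1.0, 0.0]])
```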

6.
Multimodal fake news detection methods based on semantic information have achieved great success. However, these methods exploit only the deep features of multimodal information, losing much valid information at the shallow level. To address this problem, we propose a multimodal progressive fusion network (MPFN) for disinformation detection, which captures the representational information of each modality at different levels and fuses modalities both within and across levels by means of a mixer, establishing a strong connection between them. Specifically, we use a transformer structure, effective in computer vision tasks, as a visual feature extractor to gradually sample features at different levels, and combine them with features from a text feature extractor and image frequency-domain information at corresponding levels for fine-grained modeling. In addition, we design a feature fusion approach that better establishes connections between modalities, further improving performance beyond other network structures in the literature. We conducted extensive experiments on two real datasets, Weibo and Twitter. Our method achieves 83.3% accuracy on the Twitter dataset, an improvement of at least 4.3% over other state-of-the-art methods. This demonstrates the effectiveness of MPFN for identifying fake news by combining different levels of information from each modality with a powerful modality fusion method.

7.
Since previous studies in cognitive psychology show that individuals' affective states can help analyze and predict their future behaviors, researchers have explored emotion mining for predicting online activities, firm profitability, and so on. Existing emotion mining methods fall into two categories: feature-based approaches that rely on handcrafted annotations, and deep learning-based methods that thrive on computational resources and big data. However, neither category effectively detects emotional expressions in text (e.g., social media postings), and these methods are rarely used in downstream explanatory and predictive applications. To fill these research gaps, we develop a novel deep learning-based emotion detector named DeepEmotionNet that simultaneously leverages contextual, syntactic, semantic, and document-level features together with lexicon-based linguistic knowledge to bootstrap overall emotion detection performance. On three emotion detection benchmark corpora, our experimental results confirm that DeepEmotionNet outperforms state-of-the-art baseline methods by 4.9% to 29.8% in macro-averaged F-score. In a downstream real-world financial application of DeepEmotionNet, our econometric analysis highlights that top executives' emotions of fear and anger embedded in their social media postings are significantly associated with corporate financial performance, and that these two emotions improve the predictive power for corporate financial performance beyond sentiments. To the best of our knowledge, this is the first study to develop a deep learning-based emotion detection method and successfully apply it to enhance corporate performance prediction.

8.
Over recent years, the growth of online social media has greatly facilitated the way people communicate with each other. Users of online social media share information, connect with other people and stay informed about trending events. However, much recent information appearing on social media is dubious and, in some cases, intended to mislead. Such content is often called fake news. Large amounts of online fake news have the potential to cause serious problems in society. Many point to the 2016 U.S. presidential election campaign as having been influenced by fake news. Since that election, the term has entered the mainstream vernacular. Moreover, it has drawn the attention of industry and academia, seeking to understand its origins, distribution and effects.

Of critical interest is the ability to detect when online content is untrue and intended to mislead. This is technically challenging for several reasons. Using social media tools, content is easily generated and quickly spread, leading to a large volume of content to analyse. Online information is very diverse, covering a large number of subjects, which adds complexity to this task. The truth and intent of any statement often cannot be assessed by computers alone, so efforts must depend on collaboration between humans and technology. For instance, some content that experts have deemed false and intended to mislead is available; while such sources are in limited supply, they can form a basis for this shared effort.

In this survey, we present a comprehensive overview of the findings to date on fake news. We characterize the negative impact of online fake news and the state of the art in detection methods, many of which rely on identifying features of the users, content, and context that indicate misinformation. We also study existing datasets that have been used for classifying fake news. Finally, we propose promising research directions for online fake news analysis.

9.
In an environment full of disordered information, the media spreads fake or harmful information into the public arena faster than ever before. A news report should ideally be neutral and factual, free of excessive personal emotion or opinion, and should not be intentionally or maliciously written to impose a media frame. Harmful news is defined here as news text containing explicit or implicit harmful speech that harms people or distorts readers' perception. In the current situation, however, it is difficult to effectively identify and predict fake or harmful news in advance, especially harmful news. Therefore, in this study we propose a Bidirectional Encoder Representations from Transformers (BERT) based model that applies ensemble learning methods with text sentiment analysis to identify harmful news, aiming to give readers a way to recognize harmful content and judge whether the information provided is presented in a reasonably neutral manner. The proposed system works in two phases. The first phase collects harmful news and builds a development model for analyzing the correlation between text sentiment and harmful news. The second phase identifies harmful news by analyzing text sentiment with an ensemble learning technique and the BERT model, to determine whether the news has harmful intent. Our experimental results show that the F1-score of the proposed model reaches 66.3%, an increase of 7.8% over the previous term frequency-inverse document frequency approach using a Lagrangian Support Vector Machine (LSVM) model without text sentiment. Moreover, the proposed method performs better at recognizing various cases of information disorder.
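The two-phase idea — base classifiers plus a sentiment signal combined by ensemble voting — can be sketched as follows. This is a hypothetical voting rule, not the paper's exact ensemble; the −0.3 sentiment cut-off and simple majority rule are assumptions.

```python
def ensemble_harmful(votes, sentiment_score, threshold=0.5):
    """Combine base-model votes (e.g. BERT fine-tunes) with one extra
    sentiment-aware vote: strongly negative tone counts toward 'harmful'.
    The -0.3 cut-off and majority rule are illustrative assumptions."""
    sentiment_vote = 1 if sentiment_score < -0.3 else 0
    all_votes = list(votes) + [sentiment_vote]
    return 1 if sum(all_votes) / len(all_votes) > threshold else 0

# two of three base models say harmful, and the tone is strongly negative
pred = ensemble_harmful([1, 1, 0], sentiment_score=-0.9)
```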

10.
陈杰, 马静, 李晓峰, 郭小宇. 《情报科学》(Information Science), 2022, 40(3): 117-125
[Purpose/Significance] This paper fuses multimodal text and image information for emotion recognition, introducing image-modality information for emotional-semantic enhancement, in order to address the problem that text-only information cannot accurately determine emotional polarity. [Method/Process] Using microblog posts published on Sina Weibo as the experimental data, we propose a multimodal emotion recognition algorithm based on a DR-Transformer model. Pretrained DenseNet and RoBERTa models extract emotional features from the image and text modalities, respectively; a Modal Embedding mechanism marks which modality each feature comes from; a shallow Transformer Encoder fuses the emotional features of the different modalities; and a Self-Attention mechanism dynamically adjusts the weight of each modality's features. [Results/Conclusions] Experiments on the Weibo dataset show an emotion recognition accuracy of 79.84%: improvements of 4.74% and 19.05% over text-only and image-only classifiers, respectively, and a 1.12% improvement over directly concatenating the feature vectors of the different modalities, demonstrating that the model is well grounded, reasonable, and effective for emotion recognition. [Innovation/Limitations] The Modal Embedding and Self-Attention mechanisms fuse multimodal information effectively; however, the Weibo public-opinion dataset needs further expansion.
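The Modal Embedding mechanism described above can be sketched in a few lines. This is a toy with a fixed lookup table; in DR-Transformer the modal embeddings are learned jointly with the model, and the vectors here are assumed values.

```python
def add_modal_embedding(features, modal_id, modal_table):
    """Add a modal-type embedding to every feature vector so a shared
    Transformer encoder can tell which modality each vector came from.
    The table is fixed here for illustration; the paper learns it."""
    e = modal_table[modal_id]
    return [[f + m for f, m in zip(vec, e)] for vec in features]

MODAL = {"text": [0.1, 0.1], "image": [-0.1, -0.1]}  # assumed toy values
tagged = add_modal_embedding([[0.5, 0.5]], "image", MODAL)
```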

11.
Most research on influence maximization focuses on the network-structure features of the diffusion process but lacks consideration of multi-dimensional characteristics. This paper proposes attributed influence maximization based on crowd emotion, aiming to apply users' emotions and group features to study the influence of multi-dimensional characteristics on information propagation. To measure the interaction effects of individual emotions, we define user emotion power and cluster credibility, and propose a potential-influence user discovery algorithm based on an emotion aggregation mechanism to locate seed candidate sets. A two-factor information propagation model is then introduced, which accounts for the complexity of real networks. Experiments on real-world datasets demonstrate the effectiveness of the proposed algorithm: the results outperform heuristic methods and are almost on par with greedy methods while running faster.

12.
A news article's online audience provides useful insights about the article's identity. However, fake news classifiers using such information risk relying on profiling. In response to the rising demand for ethical AI, we present a profiling-avoiding algorithm that leverages Twitter users during model optimisation while excluding them when an article's veracity is evaluated. For this, we take inspiration from the social sciences and introduce two objective functions that maximise correlation between an article and its spreaders, and among those spreaders. We applied our profiling-avoiding algorithm to three popular neural classifiers and obtained results on fake news data covering a variety of news topics. The positive impact on prediction performance demonstrates the soundness of the proposed objective functions for integrating social context into text-based classifiers. Moreover, statistical visualisation and dimension-reduction techniques show that the user-inspired classifiers better discriminate between unseen fake and true news in their latent spaces. Our study serves as a stepping stone toward resolving the underexplored issue of profiling-dependent decision-making in user-informed fake news detection.
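The article–spreader correlation objective can be sketched with plain Pearson correlation, an illustrative stand-in for the paper's objective functions. Because the term is used only during optimisation, user vectors are not needed when an article's veracity is evaluated; the vectors and averaging scheme here are assumptions.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def user_alignment_loss(article_vec, spreader_vecs):
    """Training-time objective: push the article representation to correlate
    with its spreaders' representations. Minimising (1 - mean correlation)
    maximises the article-spreader correlation."""
    corr = sum(pearson(article_vec, u) for u in spreader_vecs) / len(spreader_vecs)
    return 1.0 - corr

# perfectly correlated spreaders give zero loss
loss = user_alignment_loss([1.0, 2.0, 3.0], [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
```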

13.
Social emotion refers to the emotion a textual document evokes in the reader. In contrast to the emotion cause extraction task, which analyzes the causes of the author's sentiments based on expressions in the text, identifying the causes of the social emotion evoked in the reader has not been explored previously. Social emotion mining and its cause analysis is not only an important research topic in Web-based social media analytics and text mining but also has applications in multiple domains. Because social emotion cause identification analyzes the causes of reader emotions that are neither explicitly nor implicitly expressed in the text, it is a challenging task fundamentally different from previous research, requiring a deeper understanding of the cognitive process underlying the inference of social emotion and its causes. In this paper, we propose the new task of social emotion cause identification (SECI). Inspired by the cognitive structure of emotions (OCC) theory, we present a Cognitive Emotion model Enhanced Sequential (CogEES) method for SECI. Specifically, based on the implications of the OCC model, our method first establishes the correspondence between words/phrases in text and the emotional dimensions identified in OCC, building emotional dimension lexicons with 1,676 distinct words/phrases. Our method then uses the lexicon information and discourse coherence for semantic segmentation of the document and enhancement of clause representation learning. Finally, it combines text segmentation and clause representation into a sequential model for cause clause prediction. We construct the SECI dataset for this new task and conduct experiments to evaluate CogEES. Our method outperforms the baselines, achieving over 10% F1 improvement on average with better interpretability of the prediction results.

14.
Online recommender systems have been shown to be vulnerable to group shilling attacks, in which the attackers of a shilling group collaboratively inject fake profiles to increase or decrease the frequency with which particular items are recommended. Existing detection methods mainly use frequent itemset (dense subgraph) mining or clustering to generate candidate groups and then use hand-crafted features to identify shilling groups. Such two-stage detection methods have two limitations. On the one hand, owing to the sensitivity of the support threshold or clustering parameter settings, it is difficult to guarantee the quality of the candidate groups generated. On the other hand, they rely on manual feature engineering to extract detection features, which is costly and time-consuming. To address these limitations, we present a shilling group detection method based on a graph convolutional network. First, we model the given dataset as a graph by treating users as nodes and co-rating relations between users as edges; by assigning edge weights and filtering normal user relations, we obtain a suspicious user-relation graph. Second, we use principal component analysis to refine users' rating features and obtain the user feature matrix. Third, we design a three-layer graph convolutional network with a neighbor filtering mechanism and classify users by combining both their structure and rating features. Finally, we detect shilling groups by identifying the target items rated by the attackers according to the user classification results. Extensive experiments show that the classification accuracy and detection performance (F1-measure) of the proposed method reach 98.92% and 99.92% on the Netflix dataset and 93.18% and 92.41% on the Amazon dataset.
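The graph-convolution step over the suspicious user-relation graph can be sketched as a single mean-aggregation layer. The paper uses a three-layer GCN with learned weights and a neighbor filtering mechanism; this toy omits the learned transformation and uses an assumed adjacency matrix.

```python
def gcn_layer(adj, feats):
    """One mean-aggregation message-passing step over the user co-rating
    graph: each user's features are averaged with its neighbours'
    (self-loop included). A stand-in for one learned GCN layer."""
    n = len(feats)
    d = len(feats[0])
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]] + [i]  # neighbours + self
        out.append([sum(feats[j][k] for j in nbrs) / len(nbrs) for k in range(d)])
    return out

# two co-rating users: after one step their rating features are smoothed,
# so colluding attackers with similar behaviour drift toward a common point
adj = [[0, 1], [1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0]]
smoothed = gcn_layer(adj, feats)
```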

15.
The breeding and spreading of negative emotion in public emergencies poses severe challenges to social governance, and traditional government information-release strategies ignore the mechanism by which negative emotion evolves. Focusing on government information-release policies during public emergencies, this research uses cognitive big-data analytics and applies deep learning to news-framing framework construction, exploring how government information-release strategies influence the contagion and evolution of negative emotion. Specifically, the paper first uses Word2Vec, cosine word-vector similarity, and the SO-PMI algorithm to build an emotion lexicon oriented to public emergencies. It then proposes an emotion-computing method based on dependency parsing, designing an emotion binary tree and dependency-based emotion calculation rules. Experiments show that the proposed lexicon has wider coverage and higher accuracy than existing ones; an emotion-evolution analysis of an actual public event based on the lexicon and the proposed emotion-computing method confirms that the algorithm is feasible and effective, conducting fine-grained emotion computing and improving the accuracy and computational efficiency of sentiment classification. The empirical analysis found that, owing to defects such as slow release speed, non-transparent content, poor pertinence, and weak interdepartmental coordination, existing government information-release strategies had a significant negative impact on the contagion and evolution of anxiety and disgust and could not regulate negative emotions effectively. These results provide theoretical implications and technical support for social governance, help establish a negative-emotion management mode, and contribute to a new pattern of public-opinion guidance.
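The SO-PMI step used to build the lexicon can be sketched directly from its definition: a word's semantic orientation is its summed pointwise mutual information with positive seed words minus that with negative seeds. The corpus counts, seed lists, and zero-smoothing below are toy assumptions; the Word2Vec/cosine-similarity expansion step is omitted.

```python
import math

def so_pmi(word, pos_seeds, neg_seeds, cooc, count, total):
    """SO-PMI polarity of `word`: summed PMI with positive seeds minus
    summed PMI with negative seeds."""
    def pmi(a, b):
        c = cooc.get((a, b), 0) or cooc.get((b, a), 0)
        if c == 0:
            return 0.0  # unseen pair contributes nothing (naive smoothing)
        return math.log2(c * total / (count[a] * count[b]))
    return (sum(pmi(word, s) for s in pos_seeds)
            - sum(pmi(word, s) for s in neg_seeds))

# toy corpus statistics (assumed numbers, for illustration only)
cooc = {("calm", "good"): 4, ("calm", "bad"): 1}
count = {"calm": 5, "good": 8, "bad": 8}
polarity = so_pmi("calm", ["good"], ["bad"], cooc, count, total=100)
```

A positive score marks the word as a positive-emotion lexicon entry; a negative score, the opposite.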

16.
王巍. 《情报科学》(Information Science), 2021, 39(12): 126-132
[Purpose/Significance] This study examines network users' attitudes toward fake social news and their information behavior under different levels of involvement, analyzing the conditions under which users accept and trust false information, to provide ideas and references for follow-up research. [Method/Process] Based on a literature review, we identify factors influencing user attitudes and behavior, propose hypotheses within the Elaboration Likelihood Model (ELM) framework, and build a theoretical model. Using network users' information behavior as the data source, a questionnaire survey yielded 337 valid samples, which were processed with SPSS. [Results/Conclusions] Users' own ability and their level of involvement are the main factors affecting the spread of fake social news online. Ability negatively affects users' trust in social news; when involvement is high, the influence of information-environment and reliability factors increases, while information differences matter less. [Innovation/Limitations] Examining the drivers of false-information diffusion from the perspective of user attitudes and behavior improves the precision of misinformation governance and further enriches the application of the ELM model; the classification and precision of the sample leave room for further study.

17.
The phenomenal spread of fake news online necessitates further research into fake news perception, and we stress human factors in misinformation management. This study extends prior research on fake news and media consumption to examine how people perceive fake news, aiming to understand how news categories and sources influence individuals' perceptions. Participants (N = 1008) were randomly allocated to six groups in which they evaluated the believability of news from three categories (misinformation, conspiracy, and correction news) coupled with six online news sources whose background (official media, commercial media, and social media) and expertise level (the presence or absence of a professional editorial team) varied. Our findings indicate that people can distinguish media sources, which have a significant effect on fake news perception. Participants believed conspiracy news most, followed by the misinformation embedded in correction news, demonstrating that correction news can backfire. Significant interaction effects indicate that people are more sensitive to misinformation news and more skeptical of misinformation on social media. The findings support news literacy: users are capable of leveraging credible sources when navigating online news. Meanwhile, the challenge of processing correction news calls for design measures that promote truth-telling news.

18.
19.
This paper focuses on extracting temporal and parent–child relationships between news events in social news. Previous methods have shown that syntactic features are useful. However, most directly use the static parses produced by syntactic parsing tools, and task-irrelevant or erroneous parses inevitably degrade model performance. In addition, many implicit higher-order connections that are directly related and critical to the task are not explicitly exploited. In this paper, we propose a novel syntax-based dynamic latent graph model (SDLG) for this task. Specifically, we first apply a syntactic type-enhanced attention mechanism to assign different weights to different connections in the parsing results, which helps filter out noisy connections and better fuse the information in the syntactic structures. Next, we introduce a dynamic event-pair-aware induction graph to mine task-related latent connections. It constructs a potential attention matrix to complement and correct the supervised syntactic features, using the semantics of the event pairs as a guide. Finally, the latent graph, together with the syntactic information, is fed into a graph convolutional network to obtain an improved representation of the event for relational reasoning. We have conducted extensive experiments on four public benchmarks: MATRES, TCR, HiEve and TB-Dense. The results show that our model outperforms the state-of-the-art model by 0.4%, 1.5%, 3.0% and 1.3% in F1 score on the four datasets, respectively. Finally, we provide detailed analyses showing the effectiveness of each proposed component.

20.
With the information explosion of news articles, personalized news recommendation has become important for helping users quickly find news they are interested in. Existing news recommendation methods mainly comprise collaborative filtering methods, which rely on direct user-item interactions, and content-based methods, which characterize the content of a user's reading history. Although these methods achieve good performance, they still suffer from the data sparsity problem, since most fail to extensively exploit high-order structure information (similar users tend to read similar news articles) in news recommendation systems. In this paper, we propose building a heterogeneous graph to explicitly model the interactions among users, news and latent topics. The incorporated topic information helps indicate a user's interest and alleviates the sparsity of user-item interactions. We then take advantage of graph neural networks to learn user and news representations that encode high-order structure information by propagating embeddings over the graph. The learned user embeddings, covering complete historic user clicks, capture users' long-term interests. We also model a user's short-term interest from the recent reading history with an attention-based LSTM model. Experimental results on real-world datasets show that our proposed model significantly outperforms state-of-the-art methods for news recommendation.
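The long-term/short-term interest blend can be sketched as a simple mean/recency split with a fixed mixing weight, standing in for the paper's graph-propagated embeddings and attention-based LSTM; the vectors, `recent_k`, and `alpha` are all assumptions.

```python
def user_interest(click_history, recent_k=3, alpha=0.5):
    """Blend long-term interest (mean over all clicked-news vectors) with
    short-term interest (mean over the last `recent_k` clicks). `alpha`
    is an assumed fixed mixing weight."""
    d = len(click_history[0])
    def mean(vs):
        return [sum(v[j] for v in vs) / len(vs) for j in range(d)]
    long_term = mean(click_history)
    short_term = mean(click_history[-recent_k:])
    return [alpha * l + (1 - alpha) * s for l, s in zip(long_term, short_term)]

def score(user_vec, news_vec):
    # rank candidate news by inner product with the blended interest vector
    return sum(a * b for a, b in zip(user_vec, news_vec))

# a user who used to read topic-0 news but recently switched to topic-1
history = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
vec = user_interest(history, recent_k=2)
```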


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号