Found 20 similar documents.
1.
Many existing systems for analyzing and summarizing customer reviews about products or services are based on a number of prominent review aspects. Conventionally, the prominent review aspects of a product type are determined manually. This costly approach cannot scale to large, cross-domain services such as Amazon.com, Taobao.com, or Yelp.com, where there are many product types and new products emerge almost every day. In this paper, we propose a novel method, empowered by knowledge sources such as Probase and WordNet, for extracting the most prominent aspects of a given product type from textual reviews. The proposed method, ExtRA (Extraction of Prominent Review Aspects), (i) extracts aspect candidates from text reviews using a data-driven approach, (ii) builds an aspect graph utilizing Probase to narrow the aspect space, (iii) separates the space into reasonable aspect clusters with a set of proposed algorithms, and finally (iv) automatically and without supervision generates, from those aspect clusters, the K most prominent aspect terms or phrases that do not overlap semantically. By exploiting knowledge sources, ExtRA extracts high-quality prominent aspects as well as aspect clusters with little semantic overlap. ExtRA can extract not only words but also phrases as prominent aspects. Furthermore, it is general-purpose and can be applied to almost any type of product and service. Extensive experiments show that ExtRA is effective and achieves state-of-the-art performance on a dataset consisting of different product types.
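As a rough illustration of steps (i)–(iv), the sketch below clusters candidate aspect terms through a toy hypernym map (a stand-in for Probase) and picks one representative per cluster; the map, the frequency-based ranking, and all terms are hypothetical, not the paper's actual algorithm:

```python
from collections import Counter, defaultdict

# Hypothetical hypernym map standing in for Probase "is-a" lookups.
HYPERNYMS = {
    "battery": "power", "charge": "power",
    "screen": "display", "resolution": "display",
    "price": "cost", "value": "cost",
}

def prominent_aspects(candidate_mentions, k=2):
    """Cluster aspect candidates by shared hypernym, then emit the most
    frequent representative of each of the k largest clusters."""
    clusters = defaultdict(Counter)
    for term in candidate_mentions:
        clusters[HYPERNYMS.get(term, term)][term] += 1
    ranked = sorted(clusters.values(), key=lambda c: -sum(c.values()))
    return [c.most_common(1)[0][0] for c in ranked[:k]]

mentions = ["battery", "charge", "battery", "screen", "resolution", "price"]
print(prominent_aspects(mentions, k=2))
```

Here the "power" cluster (battery, charge) wins first place by mention count; real aspect graphs would carry edge weights and semantic-overlap checks.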
2.
There is no doubt that scientific discoveries have always brought changes to society. New technologies help solve social problems such as transportation and education, while research brings benefits such as curing diseases and improving food production. Despite the impacts science and society have on each other, this relationship is rarely studied, and the two are often treated as separate universes. Previous literature focuses on a single domain, detecting social demands or research fronts for example, without crossing the results for new insights. In this work, we create a system that assesses the relationship between social and scholarly data using the topics discussed in social networks and research topics. We use articles as science sensors and humans as social sensors via social networks. Topic modeling algorithms extract and label social subjects and research themes, and topic correlation metrics then create links between them when they have a significant relationship. The proposed system is based on topic modeling, labeling, and correlation over heterogeneous sources, so it can be used in a variety of scenarios. We evaluate the approach using a large-scale Twitter corpus combined with a PubMed article corpus. In both, we work with data on the Zika epidemic worldwide, as this scenario provides topics and discussions in both domains. Our system discovered links between various topics of different domains, which suggests that some of these relationships can be automatically inferred by the sensors. The results open new opportunities for forecasting social behavior, assessing community interest in a scientific subject, or directing research toward population welfare.
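The topic-linking step can be illustrated with a minimal sketch: represent each topic as a word distribution and connect cross-domain pairs whose cosine similarity clears a threshold. The topics, weights, and the 0.5 threshold are invented for illustration; the paper's correlation metrics may differ:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse word distributions (dicts)."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def link_topics(social_topics, research_topics, threshold=0.5):
    """Create a link between a social topic and a research topic when
    their word distributions are similar enough (threshold is illustrative)."""
    return [(s, r)
            for s, su in social_topics.items()
            for r, ru in research_topics.items()
            if cosine(su, ru) >= threshold]

social = {"outbreak_talk": {"zika": 0.6, "mosquito": 0.3, "fear": 0.1}}
research = {"vector_biology": {"zika": 0.5, "mosquito": 0.4, "genome": 0.1},
            "economics": {"market": 0.7, "trade": 0.3}}
print(link_topics(social, research))
```

Only the Zika-related pair survives the threshold, mirroring how the sensors surface cross-domain relationships.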
3.
Patent documents contain important research results. However, they are lengthy and rich in technical terminology, so analyzing them requires considerable human effort. Automatic tools for assisting patent engineers or decision makers in patent analysis are in great demand. This paper describes a series of text mining techniques that conform to the analytical process used by patent analysts. These techniques include text segmentation, summary extraction, feature selection, term association, cluster generation, topic identification, and information mapping. Both efficiency and effectiveness are considered in their design. Important features of the proposed methodology include a rigorous approach to verifying the usefulness of segment extracts as document surrogates, a corpus- and dictionary-free algorithm for keyphrase extraction, an efficient co-word analysis method that can be applied to a large volume of patents, and an automatic procedure for creating generic cluster titles for ease of result interpretation. Evaluation of these techniques confirms that the machine-generated summaries preserve more important content words than some other sections for classification. To demonstrate feasibility, the proposed methodology was applied to a real-world patent set for domain analysis and mapping, which shows that our approach is more effective than existing classification systems. The attempt to automate the whole process not only helps create final patent maps for topic analyses but also facilitates other patent analysis tasks such as patent classification, organization, knowledge sharing, and prior art search.
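Co-word analysis at its core counts keyphrase pairs that occur in the same text unit. A minimal sketch, with invented keyphrases standing in for patent terms:

```python
from collections import Counter
from itertools import combinations

def coword_counts(documents):
    """Co-word analysis: count how often each pair of keyphrases
    co-occurs within the same patent segment."""
    pairs = Counter()
    for terms in documents:
        # Deduplicate and sort so each unordered pair has one canonical key.
        for a, b in combinations(sorted(set(terms)), 2):
            pairs[(a, b)] += 1
    return pairs

docs = [["laser", "diode", "cooling"],
        ["laser", "diode"],
        ["cooling", "fan"]]
counts = coword_counts(docs)
print(counts[("diode", "laser")])  # 2
```

The resulting pair counts feed clustering and mapping; an efficient version would stream segments rather than hold all documents in memory.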
4.
The sheer volume of online media makes it difficult for people to grasp a topic comprehensively in limited time, so important information is easily missed. Topic detection and tracking (TDT) technology arose from this need: it can quickly and accurately retrieve content of interest from enormous information collections. In recent years, TDT has become a popular research direction in natural language processing. It organizes large amounts of information effectively, mines useful information from it, and presents all the details of an event or phenomenon, and the relations among them, in a concise and effective way. This paper surveys the research background, key concepts, evaluation methods, and related techniques of topic tracking, and summarizes the current state of the art.
5.
6.
Research and Development of Web Crawler Software
As a fast, efficient tool for accessing massive amounts of online data, the general-purpose search engine has been popular since its inception. However, its design has many limitations, and with the rapid growth of the World Wide Web it increasingly fails to meet users' needs. Against this background, the topic-focused crawler, which fetches web pages selectively, emerged. A focused crawler aims to use minimal resources to crawl the pages a user cares about as quickly and accurately as possible, and it is now widely applied. This paper first reviews the historical background of focused crawlers and their development at home and abroad, and analyzes the technical knowledge relevant to their design, such as the HTTP protocol, HTML parsing, and Chinese word segmentation. It then proposes using the vector space model to compute topic relevance. To make full use of the rich heuristic information in web pages, both page-content analysis and link analysis are employed. Finally, based on this study of focused crawler design and implementation, a multithreaded focused crawler is developed in Java.
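A minimal sketch of the vector-space relevance computation, combining page-content and anchor-text scores as the abstract describes; the 0.7/0.3 weights and the fetch threshold are illustrative assumptions (the crawler itself is written in Java, but the idea is language-independent):

```python
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector for a pre-tokenized page, anchor, or topic."""
    return Counter(text.split())

def cosine(u, v):
    dot = sum(c * v.get(t, 0) for t, c in u.items())
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def should_fetch(topic, page_text, anchor_text,
                 w_content=0.7, w_anchor=0.3, threshold=0.2):
    """Combine content-based and link-based (anchor text) relevance;
    the weights and threshold here are hypothetical tuning values."""
    t = tf_vector(topic)
    score = (w_content * cosine(t, tf_vector(page_text))
             + w_anchor * cosine(t, tf_vector(anchor_text)))
    return score >= threshold

print(should_fetch("web crawler java",
                   "a multithreaded web crawler in java",
                   "crawler source"))
```

A real crawler would also normalize tokens (and segment Chinese text) before vectorizing.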
7.
Reviewer assignment is an important task in many research-related activities, such as conference organization and grant-proposal adjudication. The goal is to assign each submitted artifact to a set of reviewers who can thoroughly evaluate all aspects of the artifact’s content while balancing the reviewers’ workload. In this paper, we focus on textual artifacts such as conference papers, where both the aspects of the submitted papers and the expertise areas of the reviewers can be described with terms and/or topics extracted from the text. We propose a method for automatically assigning a team of reviewers to each submitted paper, based on clusters of the reviewers’ publications as latent research areas. Our method extends the definition of the relevance score between reviewers and papers using the latent research area information to find a team of reviewers for each paper, such that each individual reviewer, and the team as a whole, covers as many paper aspects as possible. To solve the constrained problem in which each reviewer has a limited reviewing capacity, we use a greedy algorithm that starts with a group of reviewers for each paper and iteratively evolves it to improve the coverage of the papers’ topics by the reviewers’ expertise. We experimentally demonstrate that our method outperforms state-of-the-art approaches with respect to several standard quality measures.
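The greedy capacity-constrained assignment can be sketched as follows; reviewer names, expertise sets, team size, and capacity are all toy assumptions, and the real method scores relevance against latent research areas rather than raw aspect sets:

```python
def greedy_assign(papers, reviewers, capacity=2, team_size=2):
    """Greedy sketch: for each paper, repeatedly add the reviewer whose
    expertise covers the most still-uncovered paper aspects, respecting
    a per-reviewer capacity (all names and aspects are illustrative)."""
    load = {r: 0 for r in reviewers}
    assignment = {}
    for paper, aspects in papers.items():
        team, uncovered = [], set(aspects)
        while len(team) < team_size:
            best = max(
                (r for r in reviewers if r not in team and load[r] < capacity),
                key=lambda r: len(uncovered & reviewers[r]),
                default=None)
            if best is None:
                break  # everyone is at capacity
            team.append(best)
            load[best] += 1
            uncovered -= reviewers[best]
        assignment[paper] = team
    return assignment

reviewers = {"alice": {"nlp", "ir"}, "bob": {"ml", "vision"},
             "carol": {"ir", "ml"}}
papers = {"p1": {"nlp", "ml"}, "p2": {"ir", "vision"}}
print(greedy_assign(papers, reviewers))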
8.
Ding Xiao Yugang Ji Yitong Li Fuzhen Zhuang Chuan Shi 《Information processing & management》2018,54(6):861-873
Aspect mining, which aims to extract ad hoc aspects from online reviews and predict the rating or opinion on each aspect, can satisfy personalized needs for evaluating specific aspects of product quality. With the growth of related research, how to effectively integrate rating and review information has become the key issue in addressing this problem. Since matrix factorization is an effective tool for rating prediction and topic modeling is widely used for review processing, it is natural to combine the two for aspect mining (also called aspect rating prediction). However, this idea faces several challenges: choosing suitable sharing factors, handling scale mismatch, and modeling the dependency between rating and review information. In this paper, we propose a novel model that effectively integrates Matrix factorization and Topic modeling for Aspect rating prediction (MaToAsp). To overcome these challenges and ensure performance, MaToAsp employs items as the sharing factors that combine matrix factorization and topic modeling, and introduces an interpretive preference probability to eliminate scale mismatch. In the hybrid model, we establish a dependency relation from ratings to sentiment terms in phrases. Experiments on two real datasets, the Chinese Dianping and the English TripAdvisor, show that MaToAsp not only obtains reasonable aspect identification but also achieves the best aspect rating prediction performance compared to recent representative baselines.
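The matrix factorization half of such a hybrid can be sketched with plain SGD; this is generic rating-prediction factorization under invented hyperparameters, not MaToAsp's full model (which additionally shares item factors with a topic model):

```python
import random

def factorize(ratings, n_users, n_items, k=2, steps=200, lr=0.05, reg=0.02):
    """Plain matrix factorization by SGD: learn user factors P and item
    factors Q so that P[u] . Q[i] approximates rating r. Dimensions and
    hyperparameters are illustrative."""
    random.seed(0)
    P = [[random.random() for _ in range(k)] for _ in range(n_users)]
    Q = [[random.random() for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        for u, i, r in ratings:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.5), (1, 1, 1.5)]
P, Q = factorize(ratings, n_users=2, n_items=2)
pred = sum(P[0][f] * Q[0][f] for f in range(2))
print(round(pred, 1))  # approaches the observed rating of 5.0
```

In a MaToAsp-style model, the item factors Q would double as sharing factors linked to review topics rather than being trained on ratings alone.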
9.
Content analysis of e-petitions with topic modeling: How to train and evaluate LDA models?
Loni Hagen 《Information processing & management》2018,54(6):1292-1307
E-petitions have become a popular vehicle for political activism, but studying them has been difficult because efficient methods for analyzing their content are lacking. Researchers have used topic modeling for content analysis, but current practices carry serious limitations. While modeling may be more efficient than manually reading each petition, it generally relies on unsupervised machine learning and therefore requires a dependable training and validation process. This paper describes a framework for training and validating Latent Dirichlet Allocation (LDA), the simplest and most popular topic modeling algorithm, using e-petition data. With rigorous training and evaluation, 87% of LDA-generated topics made sense to human judges. Topics also aligned well with results from an independent content analysis by the Pew Research Center and were strongly associated with corresponding social events. Computer-assisted content analysts can use our guidelines to supervise every stage of LDA training and evaluation, and software developers can learn from the demands social scientists place on LDA for content analysis. These findings have significant implications for developing LDA tools and for assuring the validity and interpretability of LDA content analysis. In addition, LDA topics can have advantages over subjects extracted by manual content analysis: they reflect multiple themes expressed in texts, surface new themes not highlighted by human coders, and are less prone to human bias.
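For reference, LDA's standard collapsed Gibbs training loop can be sketched compactly; the corpus, hyperparameters, and iteration count below are toy values, and a real analysis would use a tuned library implementation plus the kind of human validation the paper describes:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Minimal collapsed Gibbs sampler for LDA (a sketch of the algorithm
    the paper trains and validates; hyperparameters are illustrative)."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})
    ndk = [[0] * n_topics for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                                # topic totals
    z = []
    for d, doc in enumerate(docs):                     # random init
        zs = []
        for w in doc:
            t = rng.randrange(n_topics)
            zs.append(t); ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(iters):                             # resample each token
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                weights = [(ndk[d][k] + alpha) * (nkw[k][w] + beta)
                           / (nk[k] + V * beta) for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return nkw

docs = [["tax", "law", "tax"], ["law", "tax"],
        ["health", "care"], ["care", "health", "care"]]
nkw = lda_gibbs(docs)
top = [max(k, key=k.get) for k in nkw]
print(top)
```

Human evaluation, as in the paper, would then judge whether each topic's top words form a coherent theme.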
10.
Ximing Li Ang Zhang Changchun Li Jihong Ouyang Yi Cai 《Information processing & management》2018,54(6):1345-1358
Topic models often produce unexplainable topics filled with noisy words, because words in topic modeling have equal weights. High-frequency words dominate the top topic word lists, but many of them are meaningless, e.g., domain-specific stopwords. To address this issue, we investigate how to weight words and develop a straightforward but effective term weighting scheme, entropy weighting (EW). The proposed EW scheme is based on conditional entropy measured by word co-occurrences. Compared with existing term weighting schemes, the highlight of EW is that it automatically rewards informative words. For more robust word weights, we further suggest a combined form of EW (CEW) with two existing weighting schemes. CEW assigns meaningless words lower weights and informative words higher weights, leading to more coherent topics during topic model inference. We apply CEW to Dirichlet multinomial mixture and latent Dirichlet allocation, and evaluate it by topic quality, document clustering, and classification tasks on eight real-world datasets. Experimental results show that weighting words effectively improves topic modeling performance over both short texts and normal long texts. More importantly, the proposed CEW significantly outperforms existing term weighting schemes, since it further considers which words are informative.
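One plausible reading of entropy-based weighting can be sketched as follows: a word whose co-occurrence distribution is near-uniform (stopword-like) gets a low weight. This is a simplified illustration, not the paper's exact EW/CEW formulation:

```python
import math
from collections import Counter, defaultdict

def entropy_weights(docs):
    """Weight each word by how peaked its co-occurrence distribution is:
    generic words co-occur with everything (high entropy, low weight),
    informative words co-occur selectively (low entropy, high weight)."""
    cooc = defaultdict(Counter)
    for doc in docs:
        for w in set(doc):
            for v in set(doc):
                if v != w:
                    cooc[w][v] += 1
    vocab_size = len(cooc)
    weights = {}
    for w, ctr in cooc.items():
        total = sum(ctr.values())
        h = -sum((c / total) * math.log(c / total) for c in ctr.values())
        weights[w] = math.log(vocab_size) - h  # low entropy => high weight
    return weights

docs = [["the", "neural", "network"], ["the", "market", "crash"],
        ["the", "neural", "model"], ["the", "market", "risk"]]
w = entropy_weights(docs)
print(w["the"] < w["neural"])  # the generic word is down-weighted
```

Such weights can then rescale word counts inside the Gibbs updates of DMM or LDA, as CEW does.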
11.
[Objective] To improve the quality and innovativeness of topic planning for scientific and technical journals. [Methods] Drawing on the experience of more than 70 special-issue plans over 40 years at Coal Science and Technology (《煤炭科学技术》), we propose that topic planning should be carried out systematically across type planning, content planning, introductory notes, and page design. Topic content can be selected precisely from conference themes; the latest science and technology information obtained through questionnaires and expert consultation; industry technology and policy information found online; spontaneous submissions; innovative topics borrowed from peer and cross-industry journals; the research directions of outstanding teams; planning activities of industry organizations; and databases such as CNKI. [Results] After these plans were implemented, the special-issue papers planned by Coal Science and Technology were cited and downloaded more frequently than other papers from the same period. [Conclusions] Implementing a series of such topic planning schemes can substantially improve the quality and efficiency of topic planning for scientific and technical journals.
12.
Information management is the management of the organizational processes, technologies, and people that collectively create, acquire, integrate, organize, process, store, disseminate, access, and dispose of information. It is a vast, multi-disciplinary domain that unites various subdomains and intermingles with other domains. This study provides a comprehensive overview of the information management domain from 1970 to 2019. Drawing on methodology from statistical text analysis research, it summarizes the evolution of knowledge in the domain by examining publication trends by author, institution, country, and so on. Further, it proposes a probabilistic generative model based on structural topic modeling to extract the latent themes of research articles related to information management, and graphically visualizes variations in topic prevalence over the period 1970 to 2019. The results highlight that the most common themes are data management, knowledge management, environmental management, project management, service management, and mobile and web management. The findings also identify knowledge management, environmental management, project management, and social communication as academic hotspots for future research.
13.
Andrea De Mauro Marco Greco Michele Grimaldi Paavo Ritala 《Information processing & management》2018,54(5):807-817
The rapid expansion of Big Data Analytics is forcing companies to rethink their Human Resource (HR) needs. At the same time, it is unclear which job roles and skills constitute this area. This study therefore seeks to bring clarity to the heterogeneous skills required in Big Data professions by analyzing a large number of real-world job posts published online. More precisely, we: 1) identify four Big Data ‘job families’; 2) recognize nine homogeneous groups of Big Data skills (skill sets) that companies demand; and 3) characterize each job family by the level of competence required within each Big Data skill set. We propose a novel, semi-automated, fully replicable analytical methodology based on a combination of machine learning algorithms and expert judgement. Our analysis leverages a large corpus of online job posts, obtained through web scraping, to generate an intelligible classification of job roles and skill sets. The results can support business leaders and HR managers in establishing clear strategies for acquiring and developing the right skills needed to best leverage Big Data. Moreover, the structured classification of job families and skill sets helps establish a common dictionary for HR recruiters and education providers, so that supply and demand can meet more effectively in the job marketplace.
14.
In this paper, the task of text segmentation is approached from a topic modeling perspective. We investigate the use of two unsupervised topic models, latent Dirichlet allocation (LDA) and multinomial mixture (MM), to segment a text into semantically coherent parts. The proposed topic-model-based approaches consistently outperform a standard baseline method on several datasets. A major benefit of the LDA-based approach is that, along with the segment boundaries, it outputs the topic distribution associated with each segment. This information is of potential use in applications such as segment retrieval and discourse analysis. However, the proposed approaches, especially the LDA-based method, have high computational requirements. Based on an analysis of the dynamic programming (DP) algorithm typically used for segmentation, we suggest a modification to DP that dramatically speeds up the process with no loss in performance. The proposed modification is not specific to topic models; it applies to any algorithm that uses DP for text segmentation.
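The DP at the heart of such segmenters can be sketched generically: best[j][m] holds the cheapest split of the first j sentences into m segments under any user-supplied segment cost. The toy cost and data below are illustrative, not the paper's LDA-based cost:

```python
def segment(sentences, max_segments, cost):
    """Classic DP for text segmentation. best[j][m] is the minimal cost
    of splitting the first j sentences into m segments; back[j][m]
    records the start of the last segment for boundary recovery."""
    n = len(sentences)
    INF = float("inf")
    best = [[INF] * (max_segments + 1) for _ in range(n + 1)]
    back = [[0] * (max_segments + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for m in range(1, max_segments + 1):
            for i in range(m - 1, j):
                c = best[i][m - 1] + cost(sentences[i:j])
                if c < best[j][m]:
                    best[j][m], back[j][m] = c, i
    # Recover the boundaries of the cheapest segmentation.
    m = min(range(1, max_segments + 1), key=lambda m: best[n][m])
    bounds, j = [], n
    while m > 0:
        i = back[j][m]
        bounds.append((i, j))
        j, m = i, m - 1
    return list(reversed(bounds))

# Toy cost: number of distinct topic labels in the segment, minus 1.
sents = ["sports", "sports", "politics", "politics", "politics"]
print(segment(sents, 3, lambda seg: len(set(seg)) - 1))
```

The paper's speed-up modifies this O(n^2 m) loop; the recurrence itself stays the same for any cost function.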
15.
Reliable valuation of drug patents must account for the technical details of pharmaceutical fundamentals and the unusually long protection period of new-drug patents, and research on machine-learning-based patent valuation remains incomplete. Targeting the accuracy of patent value assessment in the biopharmaceutical industry, this paper combines industry technology factors and patent characteristics, together with generic indicators of patent value and indicators specific to biopharmaceutical technology, to build an assessment model that couples an autoencoder (AE) with spectral clustering (SC). Using patent data from the Yaozhi patent database (药智专利通) as samples, the model extracts patent indicator features and clusters patents to assess their value, and a support vector machine then classifies patent value to validate the AE-SC model's effectiveness. The results show that the clustering accuracy achieved after autoencoder feature extraction is better than that of plain spectral clustering and traditional K-means clustering, and that patent age, drug patent type, and indication category are necessary factors when evaluating biopharmaceutical patent value.
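The spectral clustering stage can be illustrated in its simplest two-cluster form, splitting items by the sign of the Laplacian's Fiedler vector; the similarity matrix is a toy example, and the actual AE-SC model first compresses patent indicators with an autoencoder:

```python
import numpy as np

def spectral_bipartition(W):
    """Two-way spectral clustering: split items by the sign of the
    Fiedler vector (eigenvector of the second-smallest eigenvalue) of
    the unnormalized graph Laplacian L = D - W."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

# Toy patent-similarity matrix: two tight groups {0,1} and {2,3}
# joined by one weak bridge edge.
W = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
labels = spectral_bipartition(W)
print(labels)
```

In the AE-SC pipeline, W would be built from autoencoder-compressed indicator vectors, and k-means on several eigenvectors would replace the simple sign split for more than two clusters.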
16.
Users’ ability to retweet information has made Twitter one of the most prominent social media platforms for disseminating emergency information during disasters. However, few studies have examined how Twitter’s features can support the different communication patterns that occur during different phases of disaster events. Based on the disaster communication literature and Media Synchronicity Theory, we identify distinct disaster phases and the two communication types, crisis communication and risk communication, that occur during those phases. We investigate how Twitter’s representational features, including words, URLs, hashtags, and hashtag importance, influence the average retweet time, that is, the average time it takes for a retweet to occur, and how these effects differ by type of disaster communication. Our analysis of tweets from the 2013 Colorado floods found that adding more URLs to tweets increases the average retweet time more in risk-related tweets than in crisis-related tweets. Further, including key disaster-related hashtags contributed to faster retweets in crisis-related tweets than in risk-related tweets. Our findings suggest that the influence of Twitter’s media capabilities on rapid tweet propagation during disasters may differ based on the communication processes.
17.
《Information processing & management》2023,60(2):103215
Traditional topic models rest on the bag-of-words assumption, under which each word’s topic assignment is independent of the others. This assumption ignores the relationships between words, which may hinder the quality of extracted topics. To address this issue, some recent works formulate documents as graphs based on word co-occurrence patterns, assuming that two words that co-occur frequently should have the same topic. However, this introduces noise edges into the model and thus hinders topic quality, since frequent co-occurrence does not imply that two words belong to the same topic. In this paper, we use the commonsense relationships between words as a bridge to connect the words in each document. Compared with word co-occurrence, a commonsense relationship explicitly implies semantic relevance between words, which can be used to filter out noise edges. We use a relational graph neural network to capture the relational information in the graph, and manifold regularization to constrain the documents’ topic distributions. Experimental results on a public dataset show that our method extracts topics more effectively than baseline methods.
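The edge-filtering idea can be sketched simply: keep a co-occurrence edge only when some commonsense resource relates the two words. The relation set below is a tiny hypothetical stand-in for a resource such as ConceptNet:

```python
from itertools import combinations

# Hypothetical commonsense relation pairs; only these edges count as
# semantically relevant.
RELATED = {("apple", "fruit"), ("fruit", "vitamin"), ("apple", "phone")}

def related(a, b):
    return (a, b) in RELATED or (b, a) in RELATED

def document_graph(doc):
    """Build a word graph for one document, keeping a co-occurrence edge
    only if a commonsense relation backs it up (filters noise edges)."""
    words = sorted(set(doc))
    return [(a, b) for a, b in combinations(words, 2) if related(a, b)]

doc = ["apple", "fruit", "market", "vitamin"]
print(document_graph(doc))
```

The spurious "apple"–"market" co-occurrence edge is dropped; the surviving edges would then feed the relational graph neural network.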
18.
In this paper, we propose a multi-strategic matching and merging approach to find correspondences between ontologies based on the syntactic and semantic characteristics and constraints of Topic Maps. Our multi-strategic matching approach consists of a linguistic module and a Topic Map constraints-based module. The linguistic module computes similarities between concepts using morphological analysis, string normalization, tokenization, and language-dependent heuristics. The Topic Map constraints-based module takes advantage of several Topic Maps-dependent techniques, such as topic property-based matching, hierarchy-based matching, and association-based matching. This composite matching procedure need not generate the cross-pairs of all topics from the ontologies, because unmatched pairs of topics can be removed using the characteristics and constraints of the Topic Maps. Merging between Topic Maps follows the matching operations. We define a MERGE function to integrate two Topic Maps into a new Topic Map that satisfies merge requirements such as entity preservation, property preservation, relation preservation, and conflict resolution. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Wikipedia philosophy ontology as input ontologies. Our experiments show that the automatically generated matching results conform to the outputs generated manually by domain experts and greatly benefit the subsequent merging operations.
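The linguistic module's string-based matching can be sketched with normalization plus token-set Jaccard similarity; the normalization rule, threshold, and topic names are illustrative assumptions, and the real module adds morphological analysis and constraint-based filtering:

```python
import re

def normalize(label):
    """String normalization + tokenization (a simplified sketch; the
    paper also applies morphological analysis and language-dependent
    heuristics)."""
    return set(re.sub(r"[^a-z0-9 ]", " ", label.lower()).split())

def topic_similarity(a, b):
    """Token-set Jaccard similarity between two topic names."""
    ta, tb = normalize(a), normalize(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def match_topics(map_a, map_b, threshold=0.5):
    """Pair up topics from two Topic Maps whose names are similar enough
    (the 0.5 threshold is a hypothetical tuning value)."""
    return [(x, y) for x in map_a for y in map_b
            if topic_similarity(x, y) >= threshold]

print(match_topics(["Western Philosophy", "Ethics"],
                   ["Philosophy, Western", "Logic"]))
```

Constraint-based modules would then prune or confirm these candidate pairs using topic properties, hierarchies, and associations.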
19.
《Research Policy》2019,48(7):1617-1632
This study examines the relationship between gender diversity and research outcomes. Existing research on the topic primarily focuses on how team gender diversity influences scholarly productivity in terms of citations and publication rates. Far less attention has been devoted to how the intellectual contents of research disciplines change as they become more gender diverse. Drawing on a global sample of more than 25,000 management papers, we use natural language processing techniques, correspondence analysis, and regression models to illuminate impact-, content-, and status-related dimensions of gender diversity in management research. In regression models adjusting for geographical setting, institutional prestige, and collaboration patterns, we find no discernible effect of team gender diversity on per-paper scientific impact. In contrast, our analyses converge on a broadly consistent pattern of gender-related variation in research focus: women are well represented in social- and human-centered areas of management, while men comprise the vast majority in areas addressing more technical and operational aspects. Our findings corroborate recent sociological research suggesting that cultural norms and expectations channel women and men toward different areas of work and study. We argue that the broadened repertoire of perspectives, values, and questions resulting from gender diversity may make management research more responsive to the full gamut of societal needs and expectations.