Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
Agile methodologies were introduced in 2001. Since then, practitioners have applied Agile methodologies to many delivery disciplines. This article explores the application of Agile methodologies and principles to business intelligence delivery and how Agile has changed with the evolution of business intelligence. Business intelligence has evolved because the amount of data generated through the internet and smart devices has grown exponentially, altering how organizations and individuals use information. The practice of business intelligence delivery with an Agile methodology has matured; at the same time, the evolution of business intelligence has altered how Agile principles and practices are applied. The Big Data phenomenon (the volume, variety, and velocity of data) has impacted business intelligence and the use of information, and new trends such as fast analytics and data science have emerged as part of business intelligence. This paper addresses how Agile principles and practices have evolved with business intelligence, as well as the challenges involved and future directions.

3.
With the advent of Web 2.0, many online platforms such as social networks, blogs, and magazines produce massive amounts of textual data. This textual data carries information that can be used for the betterment of humanity, so there is a pressing need to extract the potentially valuable information it contains. This study presents an overview of approaches that can extract such information nuggets from text and present them in a brief, clear, and concise way. In this regard, two major tasks are reviewed: automatic keyword extraction and text summarization. To compile the literature, scientific articles were collected from the major digital computing research repositories. The survey covers approaches from early work all the way to recent advances based on machine learning. The findings show that annotated benchmark datasets for many textual data generators, such as Twitter and social forums, are not available, and this scarcity of datasets has slowed progress in many domains. In addition, deep learning techniques for automatic keyword extraction remain relatively unexplored, so the impact of various deep architectures stands as an open research direction. For text summarization, deep learning techniques were adopted after the advent of word vectors and currently define the state of the art for abstractive summarization. A major remaining challenge in both tasks is the semantics-aware evaluation of generated results.

4.
With a global reach of over 2 billion active users, the evolution of Social Media (SM) systems has provided organizations with sophisticated tools and technologies for delivering business objectives. Notably, while marketers and public relations experts have taken leading positions in the promotion and advancement of SM, project managers are often tasked with delivering SM systems. In this study, a sample of 127 project managers were asked to evaluate and recommend modes of SM development for six diverse firms using a four-part taxonomy. The results show that firms of varying size can employ narrowly focused, low-cost SM development modes to meet their business objectives, while well-resourced firms can use experimental modes to deliver widespread, higher-cost 'listen and learn' SM systems. Alternatively, in addition to achieving groundswell promotions and broader marketing and sales influencing objectives, firms that engage in large-scale SM developments can document and implement SM best practices and apply the multi-organizational collaborations required for information exchange, customer feedback, and experience sharing. These managerial perspectives expose the intrinsic connections between SM systems and information messaging and management within firms. The article adds to the cumulative body of studies on SM systems construction, deployment, and firm capability affordances.

5.
One of the most time-critical challenges for the Natural Language Processing (NLP) community is to combat the spread of fake news and misinformation. Existing approaches to misinformation detection use neural network models, statistical methods, linguistic traits, fact-checking strategies, and so on. However, the menace of fake news seems to grow more vigorous with the advent of enormous and unusually creative language models. Relevant literature reveals that one major characteristic of the virality of fake news is the presence of an element of surprise in the story, which attracts immediate attention and invokes a strong emotional stimulus in the reader. In this work, we leverage this idea and propose textual novelty detection and emotion prediction as two tasks related to automatic misinformation detection. We re-purpose textual entailment for novelty detection and use models trained on large-scale entailment and emotion datasets to classify fake information. Our results support this idea: we achieve state-of-the-art (SOTA) performance (7.92%, 1.54%, 17.31%, and 8.13% improvements in accuracy) on four large-scale misinformation datasets. We hope that our current probe will motivate the community to explore further research on misinformation detection along this line. The source code is available on GitHub.

6.
The rapid development of information technology has brought both opportunities and challenges to traditional library and information work, and how to seize these opportunities and meet these challenges is a new issue facing specialized library and information institutions. This paper analyzes the current state of specialized library and information institutions and, drawing on the author's institution's practical experience in building digitized information resources, offers suggestions for the digitization of information resources at specialized institutes.

7.
The presentation of search results on the web has been dominated by the textual form of document representation. By contrast, the document's visual aspects, such as layout, colour scheme, or presence of images, have been studied only in a limited context with regard to their effectiveness in search result presentation. This article presents a comparative evaluation of textual and visual forms of document representation as additional components of document surrogates. A total of 24 people were recruited for our task-based user study. The experimental results suggest that an increased level of document representation available in the search results can facilitate users' interaction with a search interface. The results also suggest that the two forms of additional representation are likely to benefit users' information searching process in different contexts.

8.
9.
In this study, quantitative measures of the information content of textual material have been developed based upon analysis of the linguistic structure of the sentences in the text. It has been possible to measure such properties as: (1) the amount of information contributed by a sentence to the discourse; (2) the complexity of the information within the sentence, including the overall logical structure and the contributions of local modifiers; (3) the density of information based on the ratio of the number of words in a sentence to the number of information-contributing operators. Two contrasting types of texts were used to develop the measures. The measures were then applied to contrasting sentences within one type of text. The textual material was drawn from narrative patient records and from the medical research literature. Sentences from the records were analyzed by computer and those from the literature were analyzed manually, using the same methods of analysis. The results show that quantitative measures of properties of textual information can be developed which accord with intuitively perceived differences in the informational complexity of the material.

10.
This study compares representations of Japanese personal and corporate name authority data in Japan, South Korea, and China (including Hong Kong and Taiwan) and at the Library of Congress (LC), in order to identify differences and bring to light issues affecting name authority data sharing projects such as the Virtual International Authority File (VIAF). For this purpose, actual data, manuals, formats, and case reports were collected from the organizations under study, supplemented by e-mail and face-to-face interviews. Five checkpoints considered important in creating Japanese name authority data were then set, and the data of each organization were compared from these five perspectives. Before the comparison, an overview of authority control in the Chinese-, Japanese-, and Korean-speaking (CJK) regions is also provided. The findings of the study are as follows: (1) the databases of China and South Korea mix headings in Kanji with other Chinese characters; (2) few organizations display the correspondence between Kanji and their yomi (readings); (3) romanization is not mandatory in some organizations and differs among organizations; (4) some organizations adopt representations in their local language; and (5) some names in hiragana are not linked with their local forms and might elude a search.

11.
Over recent years, organizations have started to capitalize on the significant use of Big Data and emerging technologies to analyze, and gain valuable insights linked to, decision-making processes. The process of Competitive Intelligence (CI) involves monitoring competitors with a view to delivering both actionable and meaningful intelligence to organizations. In this regard, the capacity to leverage the potential of big data tools and techniques is one of several significant components of successfully steering CI and ultimately infusing such valuable knowledge into CI strategies. In this paper, the authors examine Big Data applications in CI processes within organizations by exploring how organizations deal with Big Data analytics; the study provides a context for developing Big Data frameworks and process models for CI in organizations. Overall, the findings indicate a preference for a rather centralized, informal process as opposed to a clear formal structure for CI; the use of basic query tools as opposed to reliance on dedicated methods such as advanced machine learning; and the existence of multiple challenges that companies currently face regarding the use of big data analytics in building organizational CI.

12.
Social networks and many other graphs are attributed, meaning that their nodes are labelled with textual information such as personal data, expertise, or interests. In attributed graphs, a common data analysis task is to find subgraphs whose nodes contain a given set of keywords. In many applications, the size of the subgraph should be limited (i.e., a subgraph with thousands of nodes is not desired). In this work, we introduce the problem of compact attributed group (AG) discovery. Given a set of query keywords and a desired solution size, the task is to find subgraphs with the desired number of nodes, such that the nodes are closely connected and each node contains as many query keywords as possible. We prove that finding an optimal solution is NP-hard, and we propose approximation algorithms with a guaranteed ratio of two. Since the number of qualifying AGs may be large, we also show how to find approximate top-k AGs with polynomial delay. Finally, we experimentally verify the effectiveness and efficiency of our techniques on real-world graphs.
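The AG discovery setting described in this abstract can be illustrated with a simple greedy heuristic: start from the node covering the most query keywords and repeatedly absorb the best-scoring neighbour until the size budget is reached. This is only a sketch of the problem setting, with an invented toy graph and scoring; it is not the paper's 2-approximation algorithm.

```python
def greedy_attributed_group(adj, labels, query, k):
    """Greedy sketch of compact attributed group discovery.

    adj    -- adjacency dict: node -> list of neighbour nodes
    labels -- dict: node -> set of keywords attached to that node
    query  -- set of query keywords
    k      -- desired group size

    Grows a connected subgraph of up to k nodes, always adding the
    frontier node that covers the most query keywords.
    """
    query = set(query)
    # Score a node by how many query keywords its label set covers.
    score = lambda v: len(query & labels.get(v, set()))
    # Seed with the best-scoring node in the whole graph.
    seed = max(adj, key=score)
    group = {seed}
    frontier = set(adj[seed])
    while len(group) < k and frontier:
        best = max(frontier, key=score)   # absorb best neighbour
        group.add(best)
        frontier |= set(adj[best])
        frontier -= group                 # keep frontier outside group
    return group
```

Because the group only ever grows along edges from nodes already inside it, the returned subgraph is connected by construction, which mirrors the "closely connected" requirement in the problem statement.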

13.
Prometheus, 2012, 30(2): 187-193
Social media (SM) are fast becoming a locus of disaster-related activities that range from volunteers helping locate disaster victims to actions that are malicious and offensive, from sincere expressions of empathy towards affected communities to consuming disaster imagery for mere entertainment, from recovery support funds being collected to online marketers preying on the attention afforded to a disaster event. Because of the diversity and sheer volume of both relevant and irrelevant information circulating throughout SM, prioritising an affected population’s needs and relevant data is an increasingly complex task. In addition, SM data need to be interpreted as manifestations of social processes related to community resilience, diversity and conflict of interests, and attitudes to particular response strategies. The use of SM in disasters generates a growing need for domain-specific technological solutions that can enhance public interests as well as address the needs of both disaster managers and the affected population. This task requires integrating social sciences into the development of tools that enable disaster SM data detection, filtering, analysis and representation. The aim of this paper is to contribute to a critical-constructive dialogue between social scientists and developers of SM analytic capabilities. In the context of historical, anthropological and sociological research on disaster, this paper outlines concepts of the disaster paradigm, data as a product of social and representational practices, and disaster context, and discusses their heuristic significance for the analysis of disaster SM as a manifestation of social and cultural practices.

14.
Rapid appraisal of damages related to hazard events is important to first responders, government agencies, the insurance industry, and other private and public organizations. While satellite monitoring, ground-based sensor systems, inspections, and other technologies provide data to inform post-disaster response, crowdsourcing through social media is an additional and novel data source. In this study, the use of social media data, principally Twitter postings, is investigated to make approximate but rapid early assessments of damages following a disaster. The goal is to explore the potential utility of social media data for rapid damage assessment after sudden-onset hazard events and to identify insights related to potential challenges. The study defines a text-based damage assessment scale for earthquake damage and then develops a text classification model for rapid damage assessment. Although accuracy remains a challenge compared to ground-based instrumental readings and inspections, the proposed model offers rapid assessment from large amounts of data at spatial densities that exceed those of conventional sensor networks. The 2019 Ridgecrest, California earthquake sequence is investigated as a case study.
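Mapping free-form posts onto a discrete damage scale is, at its core, a text classification problem. Below is a minimal bag-of-words Naive Bayes sketch; the training sentences and the 0/1/3 damage levels are invented for illustration and do not reproduce the study's scale or model.

```python
import math
from collections import Counter, defaultdict

class DamageClassifier:
    """Tiny multinomial Naive Bayes over whitespace-tokenized words.

    Illustrative only: a real system would need far more data,
    robust tokenization, and calibration against ground-truth
    damage observations.
    """

    def fit(self, texts, levels):
        self.word_counts = defaultdict(Counter)  # level -> word counts
        self.level_counts = Counter(levels)      # level -> #examples
        for text, level in zip(texts, levels):
            self.word_counts[level].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        words = text.lower().split()

        def log_prob(level):
            counts = self.word_counts[level]
            total = sum(counts.values())
            lp = math.log(self.level_counts[level])
            # Laplace-smoothed per-word likelihoods
            for w in words:
                lp += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return lp

        return max(self.level_counts, key=log_prob)
```

Training on a handful of labelled posts and calling `predict` on unseen text returns the most probable damage level under the model.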

15.
The standard model (SM) of particle physics, comprising the unified electroweak and quantum chromodynamic theories, accurately explains almost all experimental results related to the micro-world, and has made a number of predictions for previously unseen particles, most notably the Higgs scalar boson, that were subsequently discovered. As a result, the SM is now universally accepted as the theory of the fundamental particles and their interactions. In spite of its numerous successes, however, the SM has a number of apparent shortcomings, including: many free parameters that must be supplied by experimental measurements; no mechanism to produce the dominance of matter over antimatter in the universe; and no explanation for gravity, the dark matter in the universe, neutrino masses, the number of particle generations, and so on. Because of these shortcomings, there is considerable incentive to search for evidence of new, non-SM physics phenomena that might provide important clues about what a beyond-the-SM (BSM) theory might look like. Although the center-of-mass energies that BESIII can access are far below the energy frontier, searches for new BSM physics are an important component of its research program. This report reviews some highlights of BESIII's searches for signs of new BSM physics: measuring rates for processes that the SM predicts to be forbidden or very rare; searching for non-SM particles such as dark photons; performing precision tests of SM predictions; and looking for violations of the discrete symmetries C and CP in processes for which the SM expectations are immeasurably small.

16.
GPS-enabled devices and the popularity of social media have created an unprecedented opportunity for researchers to collect, explore, and analyze text data with fine-grained spatial and temporal metadata. Text, time, and space are different domains with their own representation scales and methods, which poses the challenge of detecting relevant patterns that may only arise from the combination of text with spatio-temporal elements. In particular, spatio-temporal textual data representation has relied on feature embedding techniques, which can limit a model's expressiveness for representing patterns extracted from the sequence structure of textual data. To deal with these problems, we propose an Acceptor recurrent neural network model that jointly models spatio-temporal textual data. Our goal is to represent the mutual influence and relationships that can exist between written language and the time and place where it was produced. We represent space, time, and text as tuples, and use pairs of elements to predict the third one, resulting in three predictive tasks that are trained simultaneously. We conduct experiments on two social media datasets and on a crime dataset, using Mean Reciprocal Rank as the evaluation metric. The experiments show that our model outperforms state-of-the-art methods, with improvements ranging from 5.5% to 24.7% for location and time prediction.

17.
Management innovation and the consultants who promote and support it are both typically associated with the ‘new’, with departures from the norm and from standard approaches. Indeed, standardization is often seen as an impediment to innovation, especially in the current ‘post-bureaucratic’ era. This article challenges such a view, arguing that consultant-led management innovation is often highly standardized. Based upon qualitative research into internal consultancy in large business organizations, both standardizing agendas and standardized methods are identified from a range of consultant-led management innovation programs. The analysis then points to some of the structural and cultural features of organizations that lead to managers favouring incremental, standardized approaches to change, even if these are often contested. In conclusion, the article points to the need to consider a range of different dimensions in the relationship between standardization and management innovation.

18.
This article introduces a type of uncertainty that resides in textual information and requires epistemic interpretation on the information seeker’s part. Epistemic modality, as defined in linguistics and natural language processing, is a writer’s estimation of the validity of propositional content in texts: an evaluation of the chances that a certain hypothetical state of affairs is true, e.g., definitely true or possibly true. This research shifts attention from the uncertainty–certainty dichotomy to a gradient epistemic continuum of absolute, high, moderate, and low certainty, and uncertainty. An analysis of a New York Times dataset showed that epistemically modalized statements are pervasive in news discourse and occur at a significantly higher rate in editorials than in news reports. Four independent annotators were able to recognize a gradation on the continuum, but individual perceptions of the boundaries between levels were highly subjective; stricter annotation instructions and longer coder training improved intercoder agreement. This paper offers an interdisciplinary bridge between research in linguistics, natural language processing, and information seeking, with potential benefits for the design and implementation of information systems for situations where large amounts of textual information are screened manually on a regular basis, for instance by professional intelligence or business analysts.
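A first approximation of such a certainty continuum can be obtained by matching sentences against a lexicon of modal markers. The marker lists and level names below are invented for illustration; they are not the annotation scheme used in the study.

```python
import re

# Hypothetical cue-word lexicon, ordered from strongest to weakest
# certainty. A real lexicon would be far larger and corpus-derived.
EPISTEMIC_MARKERS = {
    "absolute": ["definitely", "certainly", "undoubtedly"],
    "high": ["probably", "likely", "presumably"],
    "moderate": ["may", "might", "possibly"],
    "low": ["unlikely", "doubtful"],
    "uncertain": ["unclear", "unknown", "uncertain"],
}

def classify_certainty(sentence):
    """Return the certainty level of the first matching marker,
    or 'unmarked' when no epistemic cue is found."""
    words = re.findall(r"[a-z]+", sentence.lower())
    for level, markers in EPISTEMIC_MARKERS.items():
        if any(m in words for m in markers):
            return level
    return "unmarked"
```

Lexicon matching of this kind is a crude baseline; as the abstract notes, even trained human annotators disagree about the boundaries between adjacent levels, so purely lexical rules can only approximate the continuum.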

19.
Research Policy, 2019, 48(8): 103716
Crowdsourcing challenges are fast emerging as an effective tool for solving complex innovation problems. The main strength of the crowdsourcing model is that it brings together a large number of diverse people from all over the world to focus on solving a problem. This openness, however, results in a large number of solutions that are not appropriate, and this inhibits organizations from leveraging the value of crowdsourcing efficiently and effectively. It is therefore essential to identify ways to increase the appropriateness of solutions generated in a crowdsourcing challenge. This paper takes a step towards that by exploring what motivates the crowd to participate in these challenges and how these motivations relate to solution appropriateness. Drawing on data from InnoCentive, one of the largest crowdsourcing platforms for innovation problems, this paper shows that the various types of motivation driving crowd members to participate were related in different ways to the appropriateness of the solutions generated. In particular, intrinsic and extrinsic motivation were positively related to appropriateness whereas for learning and prosocial motivation the relationship was negative. The association between social motivation and appropriateness was not significant. The results have important implications for how to better design crowdsourcing challenges.

20.
In the era of big data, traditional research on equipment science and technology (S&T) information faces enormous challenges at every stage, from requirements alignment and information collection to data analysis, presentation of results, and delivery. This article analyzes the main problems facing equipment S&T information research in the big data era, constructs an overall framework for an equipment S&T information research system suited to this era, and discusses the system's practical application. It proposes vigorously advancing the construction of the system, with reshaping the research workflow for the big data era as the focus and the development and application of the equipment S&T information research architecture as the foundation.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号