Similar Documents
20 similar documents found.
1.
This paper presents a general framework for studying the digital innovation ecosystem. The notion of complex networks offers a conceptual lens for analyzing the emergence and evolution of a digital innovation ecosystem. The framework uses digital data and evolutionary community detection analysis for empirical inquiry into the digital innovation landscape. The proposed framework is applied to the big data ecosystem: three years of Twitter data are processed and analyzed. This study reveals a large number of elements that are diverse in form and capacity, including organizations, concepts (e.g., #analytics, #iot), technologies (e.g., #hadoop), applications (e.g., #healthcare), infrastructures (e.g., #cloud), regulations, professional meetings and associations, tools, and knowledge. These elements and their communities have evolved within the big data ecosystem. The findings show that digital innovation evolves through two mechanisms, variation and selective retention, both of which are nonlinear and often unpredictable. Implications are presented and potential ways to improve the proposed framework are discussed. The study aims to make both conceptual and methodological contributions to digital innovation research.
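To make the method concrete, the following sketch (a simplification, not the authors' exact pipeline) builds a hashtag co-occurrence network for each time slice and links the detected communities across consecutive slices by set overlap; the input format, slicing scheme, and matching threshold are illustrative assumptions.

```python
# Simplified sketch: per-slice hashtag co-occurrence graphs plus community
# tracking across slices. Input format, slicing and the Jaccard threshold
# are illustrative assumptions, not the authors' exact pipeline.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_slice_graph(tweets):
    """tweets: iterable of hashtag lists, e.g. [['#bigdata', '#iot'], ...]."""
    g = nx.Graph()
    for tags in tweets:
        for a, b in combinations(sorted(set(tags)), 2):
            w = g.get_edge_data(a, b, {"weight": 0})["weight"]
            g.add_edge(a, b, weight=w + 1)
    return g

def track_communities(slices, match_threshold=0.3):
    """Detect communities per time slice and link them across consecutive
    slices by Jaccard overlap, a crude proxy for variation and retention."""
    history = [list(greedy_modularity_communities(build_slice_graph(s)))
               for s in slices]
    links = []
    for t in range(len(history) - 1):
        for i, c1 in enumerate(history[t]):
            for j, c2 in enumerate(history[t + 1]):
                jac = len(c1 & c2) / len(c1 | c2)
                if jac >= match_threshold:
                    links.append((t, i, t + 1, j, round(jac, 2)))
    return history, links
```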

2.
Big data analytics associated with database searching, mining, and analysis can be seen as an innovative IT capability that can improve firm performance. Even though some leading companies are actively adopting big data analytics to strengthen market competition and to open up new business opportunities, many firms are still in the early stage of the adoption curve due to a lack of understanding of and experience with big data. Hence, it is interesting and timely to understand issues relevant to big data adoption. In this study, a research model is proposed to explain the acquisition intention of big data analytics, mainly from the theoretical perspectives of data quality management and data usage experience. Our empirical investigation reveals that a firm's intention to adopt big data analytics can be positively affected by its competence in maintaining the quality of corporate data. Moreover, a firm's favorable experience (i.e., benefit perceptions) in utilizing external source data could encourage future acquisition of big data analytics. Surprisingly, a firm's favorable experience (i.e., benefit perceptions) in utilizing internal source data could hamper its adoption intention for big data analytics.

3.
While open innovation provides a new paradigm to sustain a firm's competitive advantage, opening up to external knowledge also entails substantial risks of appropriation and opportunism. Building on this "open paradox" framework, this study investigates whether societal trust, a key aspect of informal cultural norms, serves as an effective mechanism for improving relational governance among partners, thereby leading to better collaborative outcomes. Using novel panel data on co-owned patents across 29 countries, we show that firms in high-trust countries are able to produce a higher level of joint output (i.e., co-owned patents). This effect is more pronounced when perceived opportunism is higher (i.e., for firms in high-tech industries, or in countries with less disclosure transparency) and when formal contracts are less enforceable (i.e., in countries with relatively weak legal systems). We further show that open innovation is the channel through which societal trust promotes innovative efficiency. Overall, our study establishes societal trust as a key factor influencing the efficiency of open innovation.

4.
With the popularity of social platforms such as Sina Weibo and Twitter, a large number of public events spread rapidly on social networks, and huge amounts of textual data are generated along with the discussion of netizens. Social text clustering has become one of the most critical methods for helping people find relevant information, and it provides quality data for subsequent, timely public opinion analysis. Most existing neural clustering methods rely on manual labeling of training sets and take a long time in the learning process. Due to the explosiveness and large scale of social media data, satisfying users' timeliness demands is a challenge for social text clustering. This paper proposes a novel unsupervised event-oriented graph clustering framework (EGC), which achieves efficient clustering performance on large-scale datasets with low time overhead and does not require any labeled data. Specifically, EGC first mines the potential relations existing in social text data and transforms the textual data of social media into an event-oriented graph, taking advantage of the graph structure to represent complex relations. Secondly, EGC uses a keyword-based local importance method to accurately measure the weights of relations in the event-oriented graph. Finally, a bidirectional depth-first clustering algorithm based on these interrelations is proposed to cluster the nodes in the event-oriented graph. By projecting the relations of the graph into a smaller domain, EGC achieves fast convergence. The experimental results show that the clustering performance of EGC on the Weibo dataset reaches 0.926 (NMI), 0.926 (AMI), and 0.866 (ARI), which are 13%–30% higher than other clustering methods. In addition, the average query time on EGC-clustered data is 16.7 ms, which is 90% less than on the original data.
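A heavily simplified sketch of the EGC idea follows: short social texts are turned into a keyword-relation graph and nodes are clustered by depth-first traversal over sufficiently strong edges. The tokenizer, weighting, and pruning threshold are placeholder assumptions; the paper's keyword-based local importance measure and bidirectional DFS are more elaborate.

```python
# Heavily simplified EGC-style pipeline: keyword-relation graph + DFS-based
# clustering. Tokenizer, weights and threshold are placeholder assumptions.
from collections import defaultdict
from itertools import combinations

def build_event_graph(posts, min_weight=2):
    weights = defaultdict(int)
    for text in posts:
        terms = sorted(set(text.lower().split()))   # toy tokenizer
        for a, b in combinations(terms, 2):
            weights[(a, b)] += 1
    adj = defaultdict(set)
    for (a, b), w in weights.items():
        if w >= min_weight:                         # prune weak relations
            adj[a].add(b)
            adj[b].add(a)
    return dict(adj)

def dfs_clusters(adj):
    """Cluster nodes by iterative depth-first traversal over strong edges."""
    seen, clusters = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.add(node)
            stack.extend(adj[node] - seen)
        clusters.append(component)
    return clusters
```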

5.
In recent years, there has been an increasing number of mitigation procedures against consumer unfairness in personalized rankings. However, the experimental protocols adopted so far for evaluating a mitigation procedure have often been fundamentally different (e.g., with respect to the fairness definitions, data sets, data splits, and evaluation metrics) and limited to a narrow set of perspectives (e.g., focusing on a single demographic attribute and/or not reporting any analysis of efficiency). This situation makes it challenging for scientists to consciously decide which mitigation procedure best suits their practical setting. In this paper, we investigated the properties on which a given mitigation procedure against consumer unfairness should be evaluated, to provide a more holistic view of its effectiveness. We first identified eight technical properties and evaluated the extent to which existing mitigation procedures against consumer unfairness met these properties, qualitatively and quantitatively (when possible), on two public data sets. Then, we outlined the main trends and open issues that emerged from our multi-dimensional analysis and provided key practical recommendations for future research. The source code accompanying this paper is available at https://github.com/jackmedda/Perspective-C-Fairness-RecSys.

6.
This paper examines the relationship between a firm's strategic framework and business environment and the probability of becoming the target of “copying”, differentiated into (i) unauthorized reproduction of its technological product elements or insignia, and (ii) patent and trademark infringement. Based on bivariate and multivariate analyses of survey data, we show patterns of the links between being (legally or illegally) imitated and IP protection (e.g., defensive publishing), general strategy (e.g., selling products abroad or off-shoring R&D activities) and organizational factors (e.g., firm size). Management implications for successful strategies against the different types of being copied are derived.

7.
Although organizational factors related to big data analytics (BDA) and its performance have been studied extensively, the number of failed BDA projects continues to rise. The quality of BDA information is a commonly cited factor in explanations for such failures and could prove key to improving project performance. Using the resource-based view (RBV) lens, the data analytics literature, business strategy control, and an empirical setup of two studies based on marketing and information technology managerial data, we draw on the dimensions of the balanced scorecard (BSC) as an integrating framework for BDA organizational factors. Specifically, we tested a model, from two different perspectives, that explains information quality through analytical talent and organizations' data plan alignment. Results showed that the two groups of managers understand information quality differently. The characteristics that make marketing managers the better informants on information quality are identified. In addition, hybrid (embedded) analyst placements are seen to achieve better performance. Moreover, we add theoretical rigour by incorporating the moderating effect of the use of big data analytics in companies. Finally, the BSC provided a greater causal understanding of the resources and capabilities within a data strategy.

8.
This paper has two core purposes. First, building on Nelson and Sampat's work, we outline the social technology conceptual framework and explain why we favour using it to explore two global health initiatives. Second, we discuss the evolution of those initiatives through the lens of the interaction between social technologies, physical technologies and general institutions. Thus we reflect on evolving conceptual landscapes on the one hand and on organisational and institutional terrains on the other. The first section of the paper presents an intellectual journey and outlines our understanding and adoption of the social technology conceptual framework. This framework, we argue, has a number of advantages over alternative theoretical approaches and perspectives. The second section describes the context in which product development partnerships (PDPs), a type of global health initiative based on a public–private partnership (PPP), have arisen. The third section develops case studies of the International AIDS Vaccine Initiative (IAVI) and the Malaria Vaccines Initiative (MVI) as social technology experiments and looks at the complex dynamics between organisation, management, scientific and R&D success, and general institutional environments. We look at these social technologies as having ‘integrator’ and ‘broker’ roles, classifications which we argue are useful in analysing the different roles taken on by these PDPs. In the conclusion we reflect on the useful ways in which the concept of social technologies can shed light on complex and networked initiatives.

9.
Open data aims to unlock the innovation potential of businesses, governments, and entrepreneurs, yet it also harbours significant challenges for its effective use. While numerous innovation successes exist that are based on the open data paradigm, there is uncertainty over the data quality of such datasets. This data quality uncertainty is a threat to the value that can be generated from such data. Data quality has been studied extensively over many decades and many approaches to data quality management have been proposed. However, these approaches are typically based on datasets internal to organizations, with known metadata, and domain knowledge of the data semantics. Open data, on the other hand, are often unfamiliar to the user and may lack metadata. The aim of this research note is to outline the challenges in dealing with data quality of open datasets, and to set an agenda for future research to address this risk to deriving value from open data investments.
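As an illustration of the kind of automated profiling such unfamiliar, metadata-poor datasets invite, the sketch below computes simple completeness, type-consistency, and distinctness statistics for an open CSV file; the specific checks are illustrative assumptions, not a method proposed in this note.

```python
# Illustrative profiling of an open CSV with unknown metadata: completeness,
# inferred-type mix and distinct counts per column. The checks are
# assumptions, not a method proposed in this research note.
import csv
from collections import Counter

def _infer_type(v):
    for caster, name in ((int, "int"), (float, "float")):
        try:
            caster(v)
            return name
        except ValueError:
            continue
    return "str"

def profile_open_csv(path):
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    report = {"rows": len(rows), "columns": {}}
    for col in (rows[0].keys() if rows else []):
        values = [r[col] for r in rows]
        non_empty = [v for v in values if v not in ("", None)]
        report["columns"][col] = {
            "completeness": len(non_empty) / max(len(values), 1),
            # more than one inferred type hints at inconsistent values
            "type_mix": dict(Counter(_infer_type(v) for v in non_empty)),
            "distinct": len(set(non_empty)),
        }
    return report
```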

10.
Diversification of web search results aims to promote documents with diverse content (i.e., covering different aspects of a query) to the top-ranked positions, in order to satisfy more users, enhance fairness, and reduce bias. In this work, we focus on explicit diversification methods, which assume that the query aspects are known at diversification time, and leverage supervised learning methods to improve their performance in three different frameworks with different features and goals. First, in the LTRDiv framework, we focus on applying typical learning-to-rank (LTR) algorithms to obtain a ranking where each top-ranked document covers as many aspects as possible. We argue that such rankings optimize various diversification metrics (under certain assumptions) and hence are likely to achieve diversity in practice. Second, in the AspectRanker framework, we apply LTR to rank the aspects of a query with the goal of more accurately setting the aspect importance values for diversification. As features, we exploit several pre- and post-retrieval query performance predictors (QPPs) to estimate how well a given aspect is covered among the candidate documents. Finally, in the LmDiv framework, we cast the diversification problem as an alternative fusion task, namely, the supervised merging of rankings per query aspect. We again use QPPs computed over the candidate set for each aspect, and optimize an objective function that is tailored to the diversification goal. We conduct thorough comparative experiments using both basic systems (based on the well-known BM25 matching function) and the best-performing systems (with more sophisticated retrieval methods) from previous TREC campaigns. Our findings reveal that the proposed frameworks, especially AspectRanker and LmDiv, outperform both non-diversified rankings and two strong diversification baselines (i.e., xQuAD and its variant) in terms of various effectiveness metrics.
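For reference, the baseline named above admits a compact formulation: xQuAD greedily picks, at each step, the document that best balances relevance against coverage of query aspects not yet covered. The sketch below follows that standard scheme; the input dictionaries and the lambda trade-off value are illustrative.

```python
# Standard greedy xQuAD-style explicit diversification (the baseline named
# above). rel, aspect_probs and cover are assumed given; lam is illustrative.
def xquad_rerank(docs, rel, aspect_probs, cover, k=10, lam=0.5):
    """docs: candidate ids; rel[d]: relevance of d; aspect_probs[a]: P(a|q);
    cover[d][a]: estimated P(d covers aspect a). Returns diversified top-k."""
    selected = []
    not_covered = {a: 1.0 for a in aspect_probs}   # P(aspect still uncovered)
    candidates = set(docs)
    while candidates and len(selected) < k:
        def score(d):
            diversity = sum(p * cover[d].get(a, 0.0) * not_covered[a]
                            for a, p in aspect_probs.items())
            return (1 - lam) * rel[d] + lam * diversity
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
        for a in not_covered:   # discount aspects the chosen document covers
            not_covered[a] *= 1.0 - cover[best].get(a, 0.0)
    return selected
```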

11.
Recognizing the role of society in the sustainability of payment system innovation through the quadruple helix framework, this study analyzes the causal influence of demand-side financial inclusion indicators on society's uptake of digital payment solutions (DPS) within the regional economy of the Gulf Cooperation Council. To this end, the present study relies on data extracted from the Global Findex surveys (in 2014 and 2017), as well as the economic theory of random utility maximization, to model individuals' DPS uptake decisions "ceteris paribus." The maximum likelihood estimation revealed no gender-based gradient in DPS uptake behaviors; additionally, financial inclusion indicators such as transaction account ownership and debit card ownership did not significantly influence endogenous or exogenous DPS uptake decisions between 2013 and 2017. However, all remaining financial inclusion indicators did significantly influence DPS uptake. Assessing these findings through the lens of open innovation and the ongoing efforts of the Arab Regional Payment System project, which seeks to expand financial inclusion by facilitating access to transaction accounts, there is reasonable evidence to suggest that complementary financial inclusion policies addressing the use dimension of DPS (i.e., extending access to saving and borrowing, along with digital payroll practices for both private and public enterprises) would contribute to more effective financial inclusion policy in the region.
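The random-utility setup described above is conventionally estimated as a binary choice model fit by maximum likelihood. The sketch below shows such a logit estimation; the column names stand in for Findex-style indicators and are hypothetical, not the study's exact variables.

```python
# Hedged sketch: DPS uptake as a binary random-utility choice estimated by
# maximum likelihood (logit). Column names are hypothetical placeholders for
# Findex-style indicators, not the study's exact variables.
import statsmodels.api as sm

def fit_dps_uptake(df):
    """df: pandas DataFrame with a binary 'dps_uptake' outcome and
    financial-inclusion indicators as covariates."""
    covariates = ["account_ownership", "debit_card", "saved_formally",
                  "borrowed_formally", "received_wages_digitally"]
    X = sm.add_constant(df[covariates])
    result = sm.Logit(df["dps_uptake"], X).fit(disp=False)  # MLE
    return result  # result.summary() reports coefficients and p-values
```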

12.
Multi-feature fusion has achieved gratifying performance in image retrieval. However, some existing fusion mechanisms can unfortunately make results worse than expected due to the domain and visual diversity of images. As a result, a pressing problem in applying feature fusion is how to determine and improve the complementarity of multi-level heterogeneous features. To this end, this paper proposes an adaptive multi-feature fusion method via cross-entropy normalization for effective image retrieval. First, various low-level features (e.g., SIFT) and high-level semantic features based on deep learning are extracted. Under each level of feature representation, the initial similarity scores of the query image w.r.t. the target dataset are calculated. Second, we use an independent reference dataset to approximate the tail of the obtained initial similarity score ranking curve by cross-entropy normalization. The area under the ranking curve is then calculated as an indicator of the merit of the corresponding feature (a smaller area indicates a more suitable feature). Finally, the fusion weight of each feature is assigned adaptively based on these areas. Extensive experiments on three public benchmark datasets demonstrate that the proposed method achieves superior performance compared with existing methods, improving mAP by a relative 1.04% (Holidays) and 1.22% (Oxf5k), and the N-S score by 0.04 (UKbench).
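An illustrative sketch of the adaptive weighting step: for each feature, rank the similarity scores obtained on a reference set, measure the area under the tail of the normalized ranking curve, and weight features inversely to that area before late fusion. The min-max normalization below is a simple stand-in for the paper's cross-entropy normalization, and the tail fraction is an assumption.

```python
# Illustrative adaptive fusion: weight each feature inversely to the area
# under the tail of its normalized score-ranking curve on a reference set.
# Min-max normalization stands in for the paper's cross-entropy normalization.
import numpy as np

def feature_weights(ref_scores, tail_frac=0.8):
    """ref_scores: {feature_name: 1-D array of similarity scores computed on
    an independent reference dataset}. Returns fusion weights summing to 1."""
    areas = {}
    for name, scores in ref_scores.items():
        s = np.sort(np.asarray(scores, dtype=float))[::-1]   # ranking curve
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)
        tail = s[int(len(s) * (1 - tail_frac)):]
        areas[name] = np.trapz(tail) / max(len(tail), 1)     # smaller = better
    inv = {n: 1.0 / (a + 1e-12) for n, a in areas.items()}
    total = sum(inv.values())
    return {n: v / total for n, v in inv.items()}

def fuse(query_scores, weights):
    """query_scores: {feature_name: {doc_id: score}}; weighted late fusion."""
    fused = {}
    for feat, w in weights.items():
        for doc, s in query_scores[feat].items():
            fused[doc] = fused.get(doc, 0.0) + w * s
    return sorted(fused, key=fused.get, reverse=True)
```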

13.
As a significant source of knowledge, virtual communities have stimulated interest in knowledge management research. Nonetheless, very few studies to date have examined the demand-side knowledge perspective, such as knowledge acquisition in virtual communities. In order to explore the knowledge acquisition process within virtual communities, this study proposes a cognitive selection framework for knowledge acquisition strategy in virtual communities. The proposed framework takes a cognitive perspective to identify how knowledge recipients select their strategy for acquiring specialized knowledge, emphasizing their cognitive goals (e.g., cognitive replication and innovation) and cognitive motivators (e.g., virtual community self-efficacy, heightened enjoyment, and time resources). Our results suggest that knowledge recipients' cognitive motivators differentially influence their cognitive goals (cognitive replication and innovation), which, in turn, are related to their selection of knowledge acquisition strategy (static and dynamic acquisition strategies, respectively).

14.
The spreading of misinformation and disinformation is a serious problem on microblogs, making user evaluation of information credibility a critical issue. This study incorporates two message format factors related to multimedia usage on microblogs (vividness and multimedia diagnosticity) with two well-discussed factors for information credibility (i.e., argument quality and source credibility) into a holistic framework to investigate user evaluation of microblog information credibility. Further, the study draws on two-factor theory and its three-factor variant to explain the nonlinear effects of the above factors on microblog information credibility. An online survey was conducted to test the proposed framework by collecting data from microblog users. The research findings reveal that, for the effects on microblog information credibility: (1) argument quality (a hygiene factor) exerts a decreasing incremental effect; (2) source credibility (a bivalent factor) exerts only a linear effect; and (3) multimedia diagnosticity (a motivating factor) exerts an increasing incremental effect. This study adds to current knowledge about information credibility by proposing an insightful framework for understanding the key predictors of microblog information credibility and further examining the nonlinear effects of these predictors.

15.
Grounded in the vast changes to work life (jobs) and home life that people are facing due to the COVID-19 pandemic (hereinafter COVID), this article presents five research directions related to COVID's impacts on jobs (job loss, job changes, job outcomes, coping, and support) and five research directions related to COVID's impact on home life (home life changes, children, life-related outcomes, social life, and support). In addition, I discuss overarching possible research directions and considerations for researchers, editors, and reviewers, as we continue our scientific journey to support people through this pandemic and beyond. I organize these directions and considerations into two sets of five each: five focal groups that should be studied (underprivileged populations, different countries and cultural contexts, women vs. men, healthcare and frontline workers, and the elderly and at-risk) and five general issues and special considerations (the role of technology as the oxygen; pre- vs. mid- vs. post-COVID studies; constraints on data collection and research due to COVID; the evolution of COVID; and a focus on contextualization, treating generalizability as irrelevant).

16.
Cross-Company Churn Prediction (CCCP) is a domain of research in which one company (the target) lacks enough data and can use data from another company (the source) to predict customer churn successfully. To support CCCP, the cross-company data are usually transformed to approximate the distribution of the target company's data prior to building a CCCP model. However, it is still unclear which data transformation method is most effective for CCCP. Also, the impact of data transformation methods on CCCP model performance with different classifiers has not been comprehensively explored in the telecommunication sector. In this study, we devised a model for CCCP using data transformation methods (i.e., log, z-score, rank and box-cox) and presented not only an extensive comparison to validate the impact of these transformation methods on CCCP, but also an evaluation of the performance of the underlying baseline classifiers (i.e., Naive Bayes (NB), K-Nearest Neighbour (KNN), Gradient Boosted Tree (GBT), Single Rule Induction (SRI) and Deep learner Neural net (DP)) for customer churn prediction in the telecommunication sector using the above-mentioned data transformation methods. We performed experiments on publicly available datasets related to the telecommunication sector. The results demonstrate that most of the data transformation methods (e.g., log, rank, and box-cox) improve the performance of CCCP significantly, whereas the z-score transformation could not achieve better results than the other data transformation methods in this study. Moreover, the NB-based CCCP model performed best on transformed data; DP, KNN and GBT performed on average, while the SRI classifier did not show significant results in terms of the commonly used evaluation measures (i.e., probability of detection, probability of false alarm, area under the curve, and g-mean).
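A hedged sketch of the transform-then-classify pipeline compared in the study: apply log, z-score, rank, or Box-Cox to the source company's features before training a churn classifier that is then applied to the target company. The feature handling is simplified and assumes numeric, non-negative inputs.

```python
# Hedged sketch of the transform-then-classify CCCP pipeline. Assumes numeric,
# non-negative feature matrices; column-wise handling is simplified.
import numpy as np
from scipy import stats
from sklearn.naive_bayes import GaussianNB

TRANSFORMS = {
    "log": lambda x: np.log1p(x),                        # assumes x >= 0
    "zscore": lambda x: stats.zscore(x, axis=0),
    "rank": lambda x: stats.rankdata(x, axis=0),
    "boxcox": lambda x: np.column_stack(
        [stats.boxcox(col + 1e-6)[0] for col in x.T]),   # needs positive values
}

def cross_company_churn(X_source, y_source, X_target, method="log"):
    """Train on the source company's (transformed) data, then predict churn
    for the target company's customers."""
    transform = TRANSFORMS[method]
    clf = GaussianNB().fit(transform(X_source), y_source)
    return clf.predict(transform(X_target))
```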

17.
Data exchange is the problem of taking data structured under a source schema and creating an instance of a target schema by following a mapping between the two schemas. There is a rich literature on problems related to data exchange, e.g., the design of a schema mapping language, the consistency of schema mappings, operations on mappings, and query answering over mappings. Data exchange has been studied extensively for the relational model and has also recently been discussed for XML data. This article investigates the construction of target instances for XML data exchange, which has received far less attention. We first present a rich language for the definition of schema mappings, which allows one to use various forms of document navigation and to specify conditions on data values. Given a schema mapping, we then provide an algorithm to construct a canonical target instance. The schema mapping alone is not adequate for expressing target semantics, and hence the canonical instance is in general not optimal. We recognize that target constraints play a crucial role in the generation of good solutions. In light of this, we employ a general XML constraint model to define target constraints. Structural constraints and keys are used to identify a certain entity, as rules for data merging. Moreover, we develop techniques to enforce non-key constraints on the canonical target instance, providing a chase method to reason about data. Experimental results show that our algorithms scale well and are effective in producing target instances of good quality.
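A toy sketch of the core construction: a source XML instance is navigated and a target instance is assembled according to a simple mapping rule. The element names and the mapping are invented for illustration; the paper's mapping language, target constraints, and chase-based merging are far richer.

```python
# Toy XML data exchange: navigate the source and assemble a target instance
# per a simple mapping (//book -> /library/item). Element names are invented.
import xml.etree.ElementTree as ET

def exchange(source_xml: str) -> ET.Element:
    src = ET.fromstring(source_xml)
    target = ET.Element("library")
    for book in src.iter("book"):          # navigation part of the mapping
        item = ET.SubElement(target, "item")
        ET.SubElement(item, "title").text = book.findtext("title", default="")
        ET.SubElement(item, "year").text = book.findtext("year", default="")
    return target

src = "<store><book><title>Data Exchange</title><year>2024</year></book></store>"
print(ET.tostring(exchange(src), encoding="unicode"))
# -> <library><item><title>Data Exchange</title><year>2024</year></item></library>
```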

18.
In this work, we elaborate on the meaning of metadata quality by surveying efforts and experiences accumulated in the digital library domain. In particular, an overview of the frameworks developed to characterize this multi-faceted concept is presented. Moreover, the most common quality-related problems affecting metadata, during both the creation and the aggregation phases, are discussed together with the approaches, technologies and tools developed to mitigate them. This survey of digital library developments is expected to contribute to the ongoing discussion on data and metadata quality occurring in the emerging yet more general framework of data infrastructures.

19.
A general method is presented to construct ordered similarity measures (OS-measures), i.e., similarity measures for ordered sets of documents (such as the results of an IR process), based on classical, well-known similarity measures for ordinary sets (measures such as Jaccard, Dice, Cosine or overlap). To this end, we first present a review of these measures and their relationships. The method given here to construct OS-measures extends the one given by Michel in a previous paper so that it becomes applicable to any pair of ordered sets. Concrete expressions of this method, applied to the classical similarity measures, are given. Some of these measures are then tested in the IR system Profil-Doc. The engine SPIRIT© extracts ranked document sets in three different contexts, each for 550 requests. The practical usability of the OS-measures is then discussed on the basis of these experiments.
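One simple way to lift a set measure to ordered sets, shown below for Jaccard, is to average the measure over matching top-k prefixes of the two rankings; this prefix-averaging construction illustrates the general idea, and is not necessarily the paper's exact OS-measure.

```python
# One simple OS-measure construction: average a classical set measure (here
# Jaccard) over matching top-k prefixes of two rankings. An illustration of
# the general idea, not necessarily the paper's exact construction.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def ordered_similarity(run1, run2, set_measure=jaccard):
    """run1, run2: ranked document-id lists, most relevant first."""
    depth = min(len(run1), len(run2))
    scores = [set_measure(set(run1[:k]), set(run2[:k]))
              for k in range(1, depth + 1)]
    return sum(scores) / depth if depth else 0.0

# Identical heads with diverging tails still score high:
print(ordered_similarity(["d1", "d2", "d3"], ["d1", "d2", "d4"]))  # ~0.83
```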

20.
In large companies whose business is critically dependent on the effectiveness of their R&D function, the provision of effective means to access and share all forms of technical information is an acute problem. It is often easier to repeat an activity than it is to determine whether the work has been carried out before. In this paper we present experiences in implementing and evaluating the MEMOIR system. MEMOIR is an open framework, i.e., it is extensible and adaptable to an organization's infrastructure and applications, and it provides its user interface via standard Web browsers. It uses trails, open hypermedia link services and a set of software agents to assist users in accessing and navigating vast amounts of information in Intranet environments. Additionally, MEMOIR exploits trail data to support users in finding colleagues with similar interests. The MEMOIR system has been installed and evaluated by two end-user organizations. This paper describes the results obtained in this evaluation.
