Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
This paper presents a hybrid approach to distributed mutual exclusion in which two algorithms are combined so that one minimizes message traffic and the other minimizes time delay. In the hybrid approach, sites are divided into groups, and two different algorithms resolve local (intra-group) and global (inter-group) conflicts. We develop a hybrid distributed mutual exclusion algorithm that uses Singhal's dynamic information structure algorithm [15] as the local algorithm to minimize time delay and Maekawa's algorithm [7] as the global algorithm to minimize message traffic. Compared with Maekawa's algorithm, which needs O(√N) messages but incurs a delay of two time units between successive executions of the Critical Section (CS), where N is the number of sites in the system, the proposed hybrid algorithm reduces message traffic by 52% and time delay by 29% at the same time.
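To make the two-level structure concrete, the sketch below partitions sites into roughly √N groups and builds grid-style quorums over one coordinator per group, so the global layer contacts O(√M) coordinators for M groups. This is a minimal illustration under our own assumptions (per-group coordinators, grid quorums rather than Maekawa's exact quorum sets) and does not reproduce the paper's protocol or Singhal's local algorithm.

```python
import math

def partition_into_groups(site_ids, group_size=None):
    """Split sites into roughly sqrt(N)-sized groups: intra-group conflicts
    would be handled by the local algorithm, inter-group conflicts by the
    quorum-based global algorithm (assumption for illustration only)."""
    n = len(site_ids)
    if group_size is None:
        group_size = max(1, math.isqrt(n))
    return [site_ids[i:i + group_size] for i in range(0, n, group_size)]

def grid_quorums(groups):
    """Grid-style quorums over one hypothetical coordinator per group: each
    coordinator's quorum is its grid row plus its grid column, giving an
    O(sqrt(M)) quorum size for M groups."""
    coords = [g[0] for g in groups]            # first site of each group acts as coordinator
    m = len(coords)
    side = math.isqrt(m) if math.isqrt(m) ** 2 == m else math.isqrt(m) + 1
    quorums = {}
    for idx, c in enumerate(coords):
        row, col = divmod(idx, side)
        row_members = {coords[row * side + j] for j in range(side) if row * side + j < m}
        col_members = {coords[i * side + col] for i in range(side) if i * side + col < m}
        quorums[c] = row_members | col_members
    return quorums

sites = list(range(16))
groups = partition_into_groups(sites)          # 4 groups of 4 sites
print(groups)
print(grid_quorums(groups))                    # each quorum has about 2*sqrt(M) - 1 members
```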

2.
3.

It has been established that much of the assistance provided to library patrons at reference desks does not require the knowledge and expertise of a professional librarian. Is the same thing true of the telephone calls received at reference desks in academic libraries? This article reports the results of one academic librarian's reference desk transaction log analysis which focuses on the categories of assistance provided to patrons who called the library's reference desk.

4.
In this paper, we present Waves, a novel document-at-a-time algorithm for fast computation of top-k query results in search systems. The Waves algorithm uses multi-tier indexes for processing queries. It performs successive tentative evaluations of results, which we call waves. Each wave traverses the index starting from a specific tier level i and may insert into the answer only documents that occur in that tier level. After processing a wave, the algorithm checks whether the answer could still be changed by successive waves; a new wave is started only if it has a chance of changing the top-k scores. Experiments show that this lazy query processing strategy yields smaller query processing times than previous approaches proposed in the literature. We compare Waves' performance with state-of-the-art document-at-a-time query processing methods that preserve top-k results and show scenarios where the method is a good alternative for computing top-k results.
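The tier-by-tier early-termination idea can be illustrated with a small sketch: each "wave" scans one tier, and a new tier is opened only if its best possible score could still displace the current k-th result. This is our own simplified toy, not the paper's algorithm or scoring model; the tier layout, per-tier upper bounds, and scores are assumptions.

```python
import heapq

def waves_topk(tiers, k):
    """Toy wave-style top-k. `tiers` is ordered from the most promising tier
    (level 0) downward; each entry is (upper_bound, postings), where
    upper_bound is a precomputed cap on any score in that tier and postings
    is a list of (doc_id, score) pairs."""
    heap = []  # min-heap of (score, doc_id) holding the current top-k
    for upper_bound, postings in tiers:
        # Start a new wave only if this tier could still change the top-k.
        if len(heap) == k and upper_bound <= heap[0][0]:
            break
        for doc_id, score in postings:          # one "wave" over this tier
            if len(heap) < k:
                heapq.heappush(heap, (score, doc_id))
            elif score > heap[0][0]:
                heapq.heapreplace(heap, (score, doc_id))
    return sorted(heap, reverse=True)

tiers = [(10.0, [(1, 9.2), (7, 8.5)]),
         (5.0,  [(3, 4.1), (9, 3.7)]),
         (1.0,  [(5, 0.9)])]
print(waves_topk(tiers, k=2))   # -> [(9.2, 1), (8.5, 7)]; the deeper tiers are skipped
```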

5.
Research Strategies, 2001, 18(3), 227–238
Over the past 30 years, researchers have asked how women learn and how they fit their learning into epistemological, or knowledge, structures. Yet no one has thoroughly related women's stages of knowledge to the Association of College and Research Libraries' (ACRL's) Information Literacy Competency Standards for Higher Education. This article surveys key models of intellectual development, particularly those that have investigated gender differences. It then asks how those woman-centered models might be used to re-read the ACRL's Information Literacy Competency Standards for Higher Education and suggests some possible instructional strategies to ensure that varying stages of development are taken into account. Finally, it suggests directions for further research.

6.
The aim of this brief communication is to reply to a letter by Kosmulski (Journal of Informetrics 6(3):368–369, 2012), which criticizes a recent indicator called the "success-index". The most interesting features of this indicator, presented in Franceschini et al. (Scientometrics, in press), are: (i) allowing the selection of an "elite" subset from a set of publications and (ii) implementing field normalization at the level of an individual publication. We show that Kosmulski's criticism is unfair and inappropriate, as it results from a misinterpretation of the indicator.

7.
How does educational stage affect the way people find information? In previous research using the Digital Visitors & Residents (V&R) framework for semi-structured interviews, context was a factor in how individuals behaved. This study of 145 online, open-ended surveys examines the impact that one's V&R educational stage has on the likelihood of attending to digital and human sources across four contexts. The contexts vary according to whether the search was professional or personal and whether it was successful or a struggle. The impact of educational stage differs by context: in some contexts, people at higher educational stages are more likely to attend to digital sources and less likely to attend to human sources; in others, there is no statistically significant difference (p < 0.10) among educational stages. These findings support previous V&R research while also demonstrating that online surveys can be used to supplement and balance the data collected from semi-structured interviews.

8.
Studies show that digital skills and literacy training programs for older adults can help to extend digital inclusion, which remains a policy challenge around the world. However, existing research provides little insight into how policy-makers can best deliver large-scale programs. This article examines the design and implementation of a nationwide, state-led digital skills and literacy program in Australia called Be Connected, which aimed to empower older adults (50 years and older) to thrive in the digital world. The article combines an exploratory survey (n = 201) with semi-structured interviews of training providers (n = 19) and draws on the public management concepts of metagovernance and governance networks to explain and contextualise the program's model of implementation. It explains how policy makers and community-based organisations can successfully address the digital literacy needs and interests of older adults through a metagovernance model. We argue that the effectiveness of the model relies on finding a balance between (a) providing standardised resources versus customised support, and (b) achieving cohesion through shared goals whilst also promoting the diversity and independence of local organisations. An effective balance can be achieved through processes of co-creation.

9.
A fuzzy Boolean neural network classifier of sampled and digitized characters is described. Binary values are used for both inputs and outputs. The circuit is trained on a set of patterns using modified versions of the algorithms used in AND-OR networks. Because these operations reduce to simple comparisons and additions, the resulting learning algorithm is extremely efficient. Patterns are processed serially, in the order in which they arrive from their source. The fuzzy set approach to pattern classification provides a degree of matching between learned and tested patterns through the membership function f of class ci. This paper explains the realization of the Boolean classifier and provides several examples of classifying patterns scanned at different resolutions and learned with a membership function, which demonstrate the quality and simplicity of the fuzzy Boolean classifier.
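As a rough illustration of what a membership-based degree of matching can look like, the sketch below scores a binary test pattern against learned binary prototypes by the fraction of agreeing bits and assigns the class with the highest membership. The membership definition, the prototype patterns, and the prototype-based learning are our own simplifications, not the paper's circuit or its exact membership function.

```python
def membership(test_bits, prototype_bits):
    """Degree of match in [0, 1]: the fraction of positions where the binary
    test pattern agrees with a learned binary prototype. This stands in for
    the membership function f of class ci, whose exact form the paper defines."""
    matches = sum(t == p for t, p in zip(test_bits, prototype_bits))
    return matches / len(prototype_bits)

def classify(test_bits, prototypes):
    """Assign the class whose prototype gives the highest membership degree."""
    return max(prototypes, key=lambda c: membership(test_bits, prototypes[c]))

# Hypothetical learned prototypes for two character classes (8-pixel patterns).
prototypes = {"A": [1, 1, 0, 1, 0, 0, 1, 1],
              "B": [0, 1, 1, 0, 1, 1, 0, 0]}
sample = [1, 1, 0, 1, 1, 0, 1, 1]                     # one noisy scanned pattern
print(classify(sample, prototypes), membership(sample, prototypes["A"]))   # -> A 0.875
```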

10.
E-resources acquisition is a pressing topic amid the global economic crisis. To ensure the continuity of e-resources, librarians turn to various approaches, including evidence-based librarianship (EBL). This study reports librarians' concerns about EBL implementation during the acquisition process. Concerns-Based Adoption Model (CBAM) tools, including a modified Stages of Concern Questionnaire (SoCQ) and the Quick Scoring Device, were used to measure individual librarians' stages of concern. The results indicate that librarians' concern scores peak at stage 2 (Self), followed by stage 5 (Collaboration), stage 3 (Management), stage 1 (Informational), and stage 6 (Refocusing), with the lowest score at stage 0 (Unconcern). The findings show that librarians are most concerned about how EBL implementation could affect them personally (Stage 2, Self) in performing their tasks as librarians. The results are significant in providing perspectives on individual librarians' sensitivity to EBL implementation as an innovation in their work processes.

11.
12.
13.
14.
This paper proposes using random walks (RW) to discover the properties of deep web data sources that are hidden behind searchable interfaces. These properties, such as the average degree and population size of both documents and terms, are of interest to the general public and find applications in business intelligence, data integration, and deep web crawling. We show that a simple RW can outperform uniform random (UR) sampling even disregarding the high cost of obtaining UR samples. We prove that in the idealized case where the degrees follow Zipf's law, the sample size of UR sampling needs to grow on the order of O(N/ln²N) with the corpus size N, while the sample size of RW sampling grows logarithmically. The Reuters corpus is used to demonstrate that term degrees resemble a power-law distribution, so RW is better than UR sampling. Document degrees, on the other hand, follow a lognormal distribution with smaller variance, so UR sampling is slightly better.
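The reason a random walk can be useful despite over-sampling high-degree nodes is that the bias is known (visit probability is proportional to degree) and can be corrected. The sketch below, written under our own assumptions about the graph representation and using the standard harmonic-mean correction rather than the paper's estimators, recovers the average degree of a toy graph from RW samples.

```python
import random
from collections import defaultdict

def random_walk_degrees(adj, steps, burn_in=200):
    """Simple random walk on an undirected graph (adjacency lists); after a
    burn-in, record the degree of every node visited. In the stationary
    distribution a node is visited with probability proportional to its degree."""
    node = next(iter(adj))
    degrees = []
    for step in range(burn_in + steps):
        node = random.choice(adj[node])
        if step >= burn_in:
            degrees.append(len(adj[node]))
    return degrees

def estimate_avg_degree(sampled_degrees):
    """Harmonic-mean correction: since the RW over-samples high-degree nodes,
    len(sample) / sum(1/d) is a consistent estimator of the true mean degree."""
    return len(sampled_degrees) / sum(1.0 / d for d in sampled_degrees)

# Toy undirected graph; the true average degree is 2|E| / |V|.
adj = defaultdict(list)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
true_avg = 2 * len(edges) / len(adj)
estimate = estimate_avg_degree(random_walk_degrees(adj, steps=20000))
print(f"true average degree {true_avg:.2f}, RW estimate {estimate:.2f}")
```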

15.
16.
This paper makes a case for the practicality of Rogers's innovation diffusion theory [Rogers, E. (1962). Diffusion of innovations. New York: The Free Press; Rogers, E. (1995). Diffusion of innovations. New York: The Free Press]. Using this theory, the paper explores the innovation process from the development stage to the diffusion stage (the stage of commercialization) at the two major research funding organizations in Thailand: the National Science and Technology Development Agency (NSTDA) and the Thailand Research Fund (TRF). Theoretical and empirical analyses are attempted, focusing on the relation between the management of research and development (R&D) projects and the level of innovation diffusion. The empirical results can help R&D managers run projects that contribute to technological development in industry.

17.
This study proposes a network-based model with two parameters for finding influential authors, based on the idea that the prestige of the whole network changes when a node is removed. We apply Katz–Bonacich centrality to define network prestige, which agrees with the idea behind the PageRank algorithm. We further derive a concise mathematical formula to calculate each author's influence score and identify the influential ones. The roles of the two parameters are revealed through simulation analysis and tests on real-world data: parameter α provides useful information exogenous to the established network, and parameter β measures the robustness of the result when the incompleteness of the network is taken into account. A comprehensive application of the new model to the coauthor network of Paul Erdős is also provided.
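A node-removal reading of this idea can be sketched directly: compute a Katz–Bonacich score for every author, define network prestige as the sum of the scores, and take an author's influence as the drop in prestige when that author is deleted. The centrality convention used here (c = β(I − αA)⁻¹A·1), the toy coauthor matrix, and the removal-based influence are illustrative choices of ours, not the paper's concise formula.

```python
import numpy as np

def katz_bonacich(adj, alpha, beta=1.0):
    """Katz-Bonacich centrality c = beta * (I - alpha*A)^(-1) * A * 1; alpha
    must stay below 1 / (largest eigenvalue of A) for the series to converge."""
    n = adj.shape[0]
    return beta * np.linalg.solve(np.eye(n) - alpha * adj, adj @ np.ones(n))

def influence_by_removal(adj, alpha, beta=1.0):
    """Influence of author i = total network prestige (sum of all centrality
    scores) minus the prestige of the network with author i removed."""
    base = katz_bonacich(adj, alpha, beta).sum()
    influence = {}
    for i in range(adj.shape[0]):
        keep = [j for j in range(adj.shape[0]) if j != i]
        influence[i] = base - katz_bonacich(adj[np.ix_(keep, keep)], alpha, beta).sum()
    return influence

# Toy coauthor network: author 0 bridges two otherwise separate pairs (1, 2) and (3, 4).
A = np.array([[0, 1, 1, 1, 1],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
scores = influence_by_removal(A, alpha=0.2)
print(max(scores, key=scores.get), scores)   # author 0 causes the largest prestige drop
```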

18.
Most of the fastest-growing string collections today are repetitive, that is, most of the constituent documents are similar to many others. As these collections keep growing, a key approach to handling them is to exploit their repetitiveness, which can reduce their space usage by orders of magnitude. We study the problem of indexing repetitive string collections in order to perform efficient document retrieval operations on them. Document retrieval problems are routinely solved by search engines on large natural language collections, but the techniques are less developed on generic string collections. The case of repetitive string collections is even less understood, and there are very few existing solutions. We develop two novel ideas, interleaved LCPs and precomputed document lists, that yield highly compressed indexes solving the problem of document listing (find all the documents where a string appears), top-k document retrieval (find the k documents where a string appears most often), and document counting (count the number of documents where a string appears). We also show that a classical data structure supporting the latter query becomes highly compressible on repetitive data. Finally, we show how the tools we developed can be combined to solve ranked conjunctive and disjunctive multi-term queries under the simple tf-idf model of relevance. We thoroughly evaluate the resulting techniques in various real-life repetitiveness scenarios, and recommend the best choices for each case.
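To make the query types concrete, here is a deliberately naive, uncompressed baseline (our own illustration; the paper's contribution is supporting these operations in compressed space on repetitive collections): document listing, document counting, top-k retrieval, and a simple tf-idf-ranked multi-term query.

```python
from collections import Counter
import math

class NaiveDocIndex:
    """Uncompressed baseline that just spells out the operations named in the
    abstract; it ignores the compression issues the paper is actually about."""

    def __init__(self, docs):
        self.docs = docs
        self.tf = [Counter(d.split()) for d in docs]   # per-document term frequencies

    def document_listing(self, term):
        """All documents in which `term` appears."""
        return [i for i, c in enumerate(self.tf) if term in c]

    def document_counting(self, term):
        """Number of documents in which `term` appears."""
        return len(self.document_listing(term))

    def top_k(self, term, k):
        """The k documents in which `term` appears most often."""
        hits = [(c[term], i) for i, c in enumerate(self.tf) if term in c]
        return sorted(hits, reverse=True)[:k]

    def ranked_query(self, terms, k, disjunctive=True):
        """Ranked conjunctive or disjunctive multi-term query under a simple tf-idf model."""
        n, scores = len(self.docs), Counter()
        for i, c in enumerate(self.tf):
            present = [t for t in terms if t in c]
            if not present or (not disjunctive and len(present) < len(terms)):
                continue
            scores[i] = sum(c[t] * math.log(n / self.document_counting(t)) for t in present)
        return scores.most_common(k)

idx = NaiveDocIndex(["a b a c", "a b b", "c c c a"])
print(idx.document_listing("a"), idx.document_counting("b"), idx.top_k("c", k=1))
print(idx.ranked_query(["a", "c"], k=2, disjunctive=False))
```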

19.
In this paper, a VLSI architecture for computing the 2-D N × N-point Discrete Cosine Transform (DCT) is presented, where N is a power of 2. The proposed bit-serial architecture has a highly regular structure and exhibits a high data throughput rate. It is based on a high-performance, application-specific multiplier. A chip was designed for computing the 4 × 4-point DCT, achieving a throughput of 246 Mpixels/s.
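For reference, the 2-D DCT is separable, which is what most hardware implementations exploit: a 1-D DCT over the rows followed by a 1-D DCT over the columns. The sketch below is plain reference arithmetic for an orthonormal DCT-II and says nothing about the bit-serial datapath or the multiplier design described in the abstract.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal n-point DCT-II matrix."""
    k = np.arange(n).reshape(-1, 1)                     # output frequency index
    i = np.arange(n).reshape(1, -1)                     # input sample index
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct_2d(block):
    """2-D DCT by the row-column decomposition: C @ X @ C.T."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

block = np.arange(16, dtype=float).reshape(4, 4)        # a toy 4 x 4 pixel block
print(np.round(dct_2d(block), 3))                       # DC coefficient = block.mean() * 4
```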

20.