Similar Literature
18 similar documents found (search time: 921 ms)
1.
Cooperative storage libraries are a storage solution for low-use literature, adopted in response to the rapid growth of collections and increasingly scarce library space. Shared governance, which originates in stakeholder theory, is a consortium governance mechanism widely used by American universities. The Five College Library Depository is one of the earliest storage libraries established in the United States; its governance is characterized by distributed decision-making among multiple stakeholders, the central role of the president in operating the governance mechanism, and the strong correlation between its trust mechanism and the performance of the cooperative storage consortium. Given the structural characteristics of cooperative storage libraries, shared governance should be the governance mechanism of choice for China's future cooperative storage libraries.

2.
Research on Distributed Information Query Based on Linked Data   Cited by: 1 (self-citations: 0, other citations: 1)
As more and more data is published as Linked Data, how to make use of it has become a focus of attention, and distributed information query over Linked Data is one of the important aspects of its use. This paper introduces the concept and key technologies of Linked Data, and divides the main implementation mechanisms for distributed information query over Linked Data into URI-dereferencing-based mechanisms and SPARQL-endpoint-based mechanisms. Finally, it identifies the main problems faced in implementing distributed information query over Linked Data: identifier uniqueness, resource discovery, data fusion, and schema mapping.

3.
When building a website with ASP.NET, some pages and shared components are accessed very frequently, and regenerating them on every request seriously hurts efficiency. This paper studies how to improve page access efficiency using the three caching techniques provided by ASP.NET: page-level output caching, fragment caching, and application caching.
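The abstract names ASP.NET's own caching facilities; as a language-neutral illustration of the underlying idea of output caching (keep a rendered result for a short time instead of regenerating it), here is a minimal Python sketch. The TTL value and the `render_homepage` stand-in are illustrative assumptions, not part of the paper.

```python
import time
import functools

def output_cache(ttl_seconds):
    """Cache the rendered output of a page-like function for ttl_seconds."""
    def decorator(render):
        store = {}  # key -> (rendered_output, expiry_timestamp)

        @functools.wraps(render)
        def wrapper(*args):
            now = time.time()
            entry = store.get(args)
            if entry is not None and entry[1] > now:
                return entry[0]                       # cache hit: reuse rendered output
            output = render(*args)                    # cache miss: regenerate
            store[args] = (output, now + ttl_seconds)
            return output
        return wrapper
    return decorator

@output_cache(ttl_seconds=60)
def render_homepage(user_group):
    # Stand-in for an expensive page render (database queries, templating, ...).
    time.sleep(0.1)
    return f"<html>homepage for {user_group}</html>"

if __name__ == "__main__":
    render_homepage("guest")   # slow: rendered and cached
    render_homepage("guest")   # fast: served from cache for the next 60 s
```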

4.
黄婷 《图书情报工作》2014,58(22):125-129
以P2P网络为基础,提出一种基于缓存的分布式语义检索模型.当网络中某一节点的剩余负载能力达到一个阈值时,它将成为中心节点,并与周围的普通节点共同构成一个拓扑域.中心节点收集域中各普通节点上的资源信息列表建立索引缓存,并将索引缓存分发到域中各普通节点.同时,中心节点会为域中的热点资源建立内容缓存,存储在本地.域中每个节点都存有检索资源的路径信息,中心节点存储着热点资源,利用这些已有信息辅助检索,可以有效提高检索的命中率,降低资源消耗.  相似文献   
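A minimal sketch of the center node's two caches described above, in Python. The class name, the hotness threshold, and the `fetch_from_owner` callback are illustrative assumptions; the paper does not specify these details.

```python
from collections import Counter

class CenterNode:
    """Domain center node: an index cache mapping each resource id to its owning
    peer, plus a content cache holding the domain's hot resources locally."""

    def __init__(self, hot_threshold=3, content_cache_size=100):
        self.index_cache = {}           # resource_id -> owning node id
        self.content_cache = {}         # resource_id -> cached content
        self.access_counts = Counter()  # how often each resource was requested
        self.hot_threshold = hot_threshold
        self.content_cache_size = content_cache_size

    def register(self, node_id, resource_ids):
        """Collect a peer's resource list into the index cache."""
        for rid in resource_ids:
            self.index_cache[rid] = node_id

    def lookup(self, resource_id, fetch_from_owner):
        """Resolve a query: serve hot content locally, otherwise route to the owner."""
        self.access_counts[resource_id] += 1
        if resource_id in self.content_cache:
            return self.content_cache[resource_id]   # served from the content cache
        owner = self.index_cache.get(resource_id)
        if owner is None:
            return None                              # unknown in this domain
        content = fetch_from_owner(owner, resource_id)
        # Promote frequently requested resources into the local content cache.
        if (self.access_counts[resource_id] >= self.hot_threshold
                and len(self.content_cache) < self.content_cache_size):
            self.content_cache[resource_id] = content
        return content
```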

5.
Design and Implementation of a Digital Library Service Management System   Cited by: 4 (self-citations: 0, other citations: 4)
For a digital library service system composed of multiple loosely coupled distributed systems, this paper proposes an application-layer monitoring scheme for digital library network services, achieving the service management goal of "if it is accessible, it can be monitored". It also gives methods for computing performance warnings and availability, and designs and implements a digital library service management system.
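The abstract does not give the paper's exact formulas; as a sketch under the common definition availability = successful probes / total probes, with an assumed latency warning threshold, an application-layer probe loop could look like this in Python:

```python
import time
import urllib.request

def probe(url, timeout=5.0):
    """One application-layer probe: returns (success, response_time_seconds)."""
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 400
    except Exception:
        ok = False
    return ok, time.time() - start

def summarize(results, latency_warn=2.0):
    """Availability = successful probes / total probes; warn on slow average latency."""
    total = len(results)
    successes = sum(1 for ok, _ in results if ok)
    availability = successes / total if total else 0.0
    avg_latency = sum(t for _, t in results) / total if total else 0.0
    warning = avg_latency > latency_warn
    return availability, avg_latency, warning

if __name__ == "__main__":
    samples = [probe("https://example.org") for _ in range(5)]
    print(summarize(samples))
```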

6.
Distributed heterogeneous systems exchange large amounts of data, and data exchange testing is an important part of testing such systems as well as one of the more important research directions in software testing. However, existing data-exchange-oriented testing theories, methods, and tools cannot meet current requirements for test quality and efficiency. Through an in-depth study of the general model and process of data exchange and of data exchange testing requirements, this paper proposes an automated software testing framework based on a data exchange test model. Testing practice shows that the framework can be applied to different data exchange testing scenarios and improves test quality and efficiency.

7.
Content-based image retrieval has long suffered from limitations in retrieval efficiency and performance. To improve retrieval efficiency, this paper studies content-based image retrieval and distributed computing and proposes DHCIR (Distributed Image Retrieval method based on Color Model to Histogram Conversion), a distributed retrieval method that converts an image's color model into a histogram, and designs and implements a system based on it. Practical tests show that the algorithm provides a stable, fast, and efficient image retrieval service and improves both the computational efficiency and the accuracy of image retrieval.
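The abstract does not state which color model or similarity measure DHCIR uses, so as a sketch assuming quantized RGB and histogram intersection, the histogram-conversion step looks like this in Python:

```python
def color_histogram(pixels, bins_per_channel=4):
    """Quantize RGB pixels into a normalized color histogram.
    pixels: iterable of (r, g, b) tuples with values in 0..255."""
    hist = [0.0] * (bins_per_channel ** 3)
    step = 256 // bins_per_channel
    n = 0
    for r, g, b in pixels:
        idx = (r // step) * bins_per_channel ** 2 + (g // step) * bins_per_channel + (b // step)
        hist[idx] += 1.0
        n += 1
    return [h / n for h in hist] if n else hist

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: the larger, the more similar the color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# In a distributed deployment each worker would index its own shard of images and
# answer queries with its best local matches, which a coordinator then merges.
query = color_histogram([(200, 30, 30)] * 10)
candidate = color_histogram([(190, 40, 35)] * 10)
print(histogram_intersection(query, candidate))
```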

8.
Research on Efficiency-Oriented Policy and Regulation Mechanisms for Network Information Resource Allocation   Cited by: 3 (self-citations: 0, other citations: 3)
Based on the current state of research on network information resource allocation mechanisms in China, this paper focuses on the use of policy and regulation mechanisms and explicitly proposes an efficiency orientation. It also elaborates on the efficiency of policies and regulations and its influencing factors, the significance of efficiency-oriented policy and regulation mechanisms, and ways to improve the efficiency of policy and regulation mechanisms for network information resource allocation.

9.
Distributed processing is a computing technique developed on top of networks that combines multiple systems to carry out computing tasks; compared with a single system, it processes tasks noticeably faster and more efficiently. Because of these advantages, distributed processing has been widely applied across industries and fields and has promoted their rapid development. Applying distributed processing to a radio station's broadcast control and transmission network not only optimizes the network's design structure but also improves its operating efficiency, raising the station's overall technical level. This paper briefly introduces distributed processing and analyzes its application in broadcast control and transmission networks, showing that it does improve the quality and working efficiency of such systems, in the hope of offering suggestions for building and optimizing broadcast control and transmission networks and ensuring the safe and stable operation of radio stations.

10.
许桂芳  林科 《兰台世界》2016,(17):16-19
Data storage and retrieval are key technical problems in building effective system applications in the big data era. To address the large data volumes, heterogeneous storage, and difficult lookup in university education information systems, this paper proposes a cloud storage and real-time search engine design based on classification indexes and accumulated hotness. The method first builds persistent classification indexes on disk. Then, through a hot-index-data caching technique and an efficient hotness-accumulation cache replacement policy, it uses memory to improve query performance and reduce the disk access overhead of the on-disk index tables.
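The abstract does not give the paper's exact hotness formula; a minimal Python sketch of a hotness-accumulation replacement policy, assuming hotness is a decayed access count and the coldest entry is evicted, could look like this:

```python
class HotnessCache:
    """Hotness-accumulation replacement: every access adds to an entry's hotness,
    hotness decays over time, and the coldest entry is evicted on overflow."""

    def __init__(self, capacity, decay=0.9):
        self.capacity = capacity
        self.decay = decay        # applied to all entries on each access
        self.data = {}            # key -> value (e.g. a cached index block)
        self.hotness = {}         # key -> accumulated, decayed access count

    def _decay_all(self):
        for k in self.hotness:
            self.hotness[k] *= self.decay

    def get(self, key, load_from_disk):
        self._decay_all()
        if key in self.data:                       # hit: bump hotness
            self.hotness[key] += 1.0
            return self.data[key]
        value = load_from_disk(key)                # miss: read the on-disk index
        if len(self.data) >= self.capacity:        # evict the coldest entry
            coldest = min(self.hotness, key=self.hotness.get)
            del self.data[coldest], self.hotness[coldest]
        self.data[key] = value
        self.hotness[key] = 1.0
        return value
```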

11.
A pragmatic way is shown to calculate database cache sizes for a given desired level of cache performance, without using application information. Treating applications as black boxes allows focusing on higher-order structures of database accesses to classify applications. Online tracing of all database accesses allows a near-line simulation of cache performance for various cache sizes. These simulations yield typical patterns for application groups, pointing to exploitable regions with strong gradients in the relation between cache size and cache efficiency. The near-line nature of this calculation lends itself to automatic, adaptive cache-size regulation.
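As a sketch of the kind of trace-driven cache-size sweep described above, assuming LRU as the stand-in replacement policy and a synthetic skewed trace in place of a real database access trace:

```python
from collections import OrderedDict
import random

def lru_hit_ratio(trace, cache_size):
    """Replay an access trace through an LRU cache of the given size; return the hit ratio."""
    cache = OrderedDict()
    hits = 0
    for page in trace:
        if page in cache:
            hits += 1
            cache.move_to_end(page)          # mark as most recently used
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)    # evict the least recently used page
            cache[page] = True
    return hits / len(trace)

# Synthetic skewed trace standing in for a real database access trace.
random.seed(0)
trace = [int(random.paretovariate(1.2)) % 10_000 for _ in range(100_000)]

# Sweeping the cache size exposes where hit ratio gains per added cache page are steepest.
for size in (100, 500, 1_000, 5_000):
    print(size, round(lru_hit_ratio(trace, size), 3))
```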

12.
Modern information retrieval systems use several levels of caching to speed up computation by exploiting frequent, recent, or costly data used in the past. Previous studies show that the use of caching techniques is crucial in search engines, as it helps reduce query response times and processing workloads on search servers. In this work we propose and evaluate a static cache that acts simultaneously as a list cache and an intersection cache, offering a more efficient way of handling cache space. We also use a query resolution strategy that takes advantage of this cache to reorder the query execution sequence. In addition, we propose effective strategies to select the term pairs that should populate the cache. We represent the data in the cache in both raw and compressed forms and evaluate the differences between them under different cache-size configurations. The results show that the proposed Integrated Cache outperforms the standard posting-list cache in most cases, taking advantage not only of the intersection cache but also of the query resolution strategy.
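A minimal Python sketch of the intersection-cache idea: a static cache of precomputed term-pair intersections is consulted first, and query resolution starts from a cached pair before intersecting the remaining terms. The toy posting lists and the single cached pair are illustrative assumptions.

```python
# Toy posting lists: term -> sorted list of document ids.
postings = {
    "cache":  [1, 2, 3, 5, 8, 13],
    "search": [2, 3, 5, 7, 11, 13],
    "engine": [1, 3, 5, 13],
}

def intersect(a, b):
    """Linear merge of two sorted posting lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

# Static intersection cache, e.g. populated offline with frequently co-occurring term pairs.
intersection_cache = {("cache", "search"): intersect(postings["cache"], postings["search"])}

def resolve(query_terms):
    """Start from a cached pair intersection when available, then intersect the rest."""
    terms = list(query_terms)
    for pair, cached in intersection_cache.items():
        if pair[0] in terms and pair[1] in terms:
            result = cached
            remaining = [t for t in terms if t not in pair]
            break
    else:
        result = postings[terms[0]]
        remaining = terms[1:]
    for t in remaining:
        result = intersect(result, postings[t])
    return result

print(resolve(["cache", "search", "engine"]))   # -> [3, 5, 13]
```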

13.
In this paper we first discuss the consequences of cache memory defects/faults for the operation of the system and show that cache tag defects/faults can have significantly more serious consequences for the integrity and performance of the system than cache data defects/faults. A possible solution is the use of a single-error-correcting, double-error-detecting (SEC/DED) code in the cache tag memory. However, the classical implementation of the SEC/DED code proves inappropriate for the tag memory because of the required silicon area and time delays. In this paper we propose a new way of exploiting the SEC/DED code that is well suited to cache tag memories. During fault-free operation the proposed technique adds no delay on the critical path of the cache, while in the case of a single error the delay is so small that the cache access time increases by at most one CPU cycle. An example design shows the superiority of the proposed technique over the classical one. The application of the proposed scheme to real- and virtual-addressed caches of one or two levels is also discussed.
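For readers unfamiliar with SEC/DED coding itself (as opposed to the paper's hardware scheme), here is a small software sketch of an extended Hamming code over 8 data bits: four Hamming parity bits give single-error correction, and an overall parity bit distinguishes single from double errors. The 8-bit width is an assumption chosen for brevity; real tag memories use wider codes.

```python
DATA_POSITIONS = [3, 5, 6, 7, 9, 10, 11, 12]   # non-power-of-two positions hold data bits

def secded_encode(data_bits):
    """Extended Hamming SEC/DED encoder for 8 data bits -> 13 code bits."""
    assert len(data_bits) == 8
    code = [0] * 13                              # code[1..12]; code[0] unused
    for pos, bit in zip(DATA_POSITIONS, data_bits):
        code[pos] = bit
    for p in (1, 2, 4, 8):                       # parity bit p covers positions with bit p set
        code[p] = 0
        for pos in range(1, 13):
            if pos != p and (pos & p):
                code[p] ^= code[pos]
    overall = 0
    for pos in range(1, 13):
        overall ^= code[pos]
    return code[1:] + [overall]                  # 12 codeword bits + overall parity bit

def secded_decode(received):
    """Returns (status, data_bits); status is 'ok', 'corrected', or 'double_error'."""
    assert len(received) == 13
    code = [0] + received[:12]
    syndrome = 0
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= code[pos]
        if parity:
            syndrome |= p
    overall_calc = 0
    for pos in range(1, 13):
        overall_calc ^= code[pos]
    overall_ok = (overall_calc == received[12])

    if syndrome == 0 and overall_ok:
        status = "ok"
    elif not overall_ok:
        status = "corrected"                     # single error; flip the indicated bit
        if syndrome != 0:
            code[syndrome] ^= 1
    else:
        status = "double_error"                  # detected but uncorrectable
    return status, [code[pos] for pos in DATA_POSITIONS]

word = secded_encode([1, 0, 1, 1, 0, 0, 1, 0])
word[4] ^= 1                                     # inject a single bit error
print(secded_decode(word))                       # -> ('corrected', [1, 0, 1, 1, 0, 0, 1, 0])
```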

14.
Due to the rapid development of VLSI technology, it is feasible to place the cache memory on the processor chip. This paper proposes an associative sequencer containing an on-chip cache memory. The sequencer emulates operations by streams of instructions and is hence called an "associative stream emulator". The associative cache memory stores emulation streams for operations belonging to several lexical levels, including microinstruction streams. The individual instructions in the streams are frequency coded. This leads to code compaction, which increases the effective instruction transfer rate on the memory-processor bus and thereby enhances performance. A possible emulator architecture and its instruction-stream processing principles are described, the structure of the associative cache cell is outlined, and the operations on the cache memory are explained. Statistical measurements and their results are shown; they indicate that 50–150 cells in the cache memory are enough to obtain reasonable performance.

15.
Application of ISA Server in Library Digitization -- as a Proxy Server   Cited by: 1 (self-citations: 0, other citations: 1)
ISA Server is an enterprise-class application server that combines security and speed. It meets users' application needs in secure, fast Web access, unified and flexible management, and openness and extensibility.

16.
We propose a new scheme for the replacement of cache lines in high-performance computer systems. Preliminary research to date indicates that neural networks (NNs) have great potential in the area of statistical prediction [1]. We use this attribute of neural networks to develop a neural-network-based replacement policy that can effectively eliminate dead lines from the cache memory by predicting the sequence of memory addresses referenced by the central processing unit (CPU) of a computer system. The proposed strategy may therefore provide better cache performance than conventional schemes such as the LRU (Least Recently Used), FIFO (First In First Out), and MRU (Most Recently Used) algorithms. In fact, we observed in simulation experiments that a carefully designed neural-network-based replacement scheme does provide excellent performance compared to the LRU scheme. The new approach can also be applied to page replacement and prefetching algorithms in virtual memory systems.
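To make the learned-replacement idea concrete, here is a deliberately simplified Python sketch: a single perceptron (standing in for the paper's neural network, whose architecture and features are not given here) scores each resident line's likelihood of reuse from recency and frequency features, and the lowest-scoring line is evicted. All feature choices and the training signal are illustrative assumptions.

```python
import math

class PerceptronReplacementCache:
    """Learned replacement sketch: a perceptron predicts reuse; the least
    reuse-likely (most 'dead') line is evicted."""

    def __init__(self, capacity, lr=0.1):
        self.capacity = capacity
        self.lr = lr
        self.w = [0.0, 0.0, 0.0]     # weights for [bias, recency, log-frequency]
        self.time = 0
        self.lines = {}              # address -> (last_access_time, access_count)

    def _features(self, address):
        last_time, count = self.lines[address]
        recency = (self.time - last_time) / max(self.time, 1)   # 0 = just used, ~1 = long idle
        return [1.0, recency, math.log1p(count)]

    def _score(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def _train(self, x, label):
        # Classic perceptron update: adjust weights only when the predicted sign is wrong.
        if label * self._score(x) <= 0:
            self.w = [wi + self.lr * label * xi for wi, xi in zip(self.w, x)]

    def access(self, address):
        self.time += 1
        if address in self.lines:
            self._train(self._features(address), +1)   # lines that look like this get reused
            _, count = self.lines[address]
            self.lines[address] = (self.time, count + 1)
            return "hit"
        if len(self.lines) >= self.capacity:
            victim = min(self.lines, key=lambda a: self._score(self._features(a)))
            self._train(self._features(victim), -1)    # lines that look like this die in cache
            del self.lines[victim]
        self.lines[address] = (self.time, 1)
        return "miss"
```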

17.
This paper presents a simulation methodology for evaluating the performance of CISC computers. The method is called the Message Flow Technique (MFT). MFT has several advantages over the Instruction Flow Technique (IFT) we presented in [1]. The proposed methodology is applied to single- and two-level-cache CISC systems, using the 80486 SX as a case study. It was found that with a single-level on-chip cache of size 8KB, the performance of the system is considerably limited by the service time of the BIU (Bus Interface Unit). The average BIU service time per instruction was found to be around 1.0135 microseconds for our Modified Gibson Mix (MGM). With a second-level external cache of size 16KB, 32KB, 64KB, or 128KB, the average performance improvements were 1.4%, 18.6%, 39%, and 53% respectively. The methodology presented here is an efficient and easy-to-use tool that can help performance analysts evaluate computer systems.

18.
With the development and application of Internet of Things and Internet technology, digital archive information resources are growing explosively, and traditional relational databases face many performance bottlenecks when reading and writing massive archive data; meeting the ever-growing demand for archive data storage has therefore become a research focus in building sharing platforms. Based on an analysis of the data storage requirements of an archive information resource sharing platform, this paper presents a modular design of the archive storage system, proposes an overall architecture that uses HDFS combined with Redis as the platform's data storage system, and compares storage performance through tests. Using HDFS as the platform's underlying storage architecture, combined with Redis caching, can quickly improve the storage and processing performance of archive information resources and provide a reliable foundation for the platform's data services.
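A minimal sketch of how Redis can front HDFS in a cache-aside read path, assuming the third-party `redis` (redis-py) and `hdfs` (WebHDFS) Python clients are available; the endpoints, user, TTL, and file path are placeholder assumptions, not values from the paper.

```python
import redis                     # redis-py client (assumed available)
from hdfs import InsecureClient  # HdfsCLI WebHDFS client (assumed available)

# Placeholder endpoints: adjust to the actual Redis instance and HDFS NameNode.
cache = redis.Redis(host="localhost", port=6379)
hdfs_client = InsecureClient("http://namenode:9870", user="archive")

CACHE_TTL_SECONDS = 300

def read_archive(path):
    """Cache-aside read: try Redis first, fall back to HDFS and populate the cache."""
    cached = cache.get(path)
    if cached is not None:
        return cached                               # hot archive object served from memory
    with hdfs_client.read(path) as reader:          # cold path: read from distributed storage
        data = reader.read()
    cache.set(path, data, ex=CACHE_TTL_SECONDS)     # keep it hot for subsequent requests
    return data

if __name__ == "__main__":
    doc = read_archive("/archives/2016/record-0001.pdf")
    print(len(doc), "bytes")
```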
