891.
The pre-analytical extraction process for accurate detection of organic acids is a crucial step in the diagnosis of organic acidemias by GC-MS analysis. This process is accomplished either by solid-phase extraction (SPE) or by liquid–liquid extraction (LLE), and both procedures are used in metabolic laboratories all over the world. In this study we compared the two extraction procedures with respect to precision, accuracy, percent recovery of metabolites, number of metabolites isolated, time and cost in a resource-constrained setup. We observed that the mean recovery from SPE was 84.1 % versus 77.4 % by LLE (p < 0.05). Moreover, the average number of metabolites isolated by SPE and LLE was 161.8 ± 18.6 and 140.1 ± 20.4 respectively. The processing cost of LLE was lower. In a cost-constrained setting, LLE may be the practical option for organic acid analysis.
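The recovery comparison above reduces to a two-sample test on percent-recovery measurements. As a hedged sketch, Welch's t-test on two hypothetical recovery arrays (the study's raw data are not given in the abstract) could be run as follows:

```python
# Hedged sketch: compare mean percent recovery of SPE vs LLE with Welch's t-test.
# The recovery values below are made-up placeholders, not the study's data.
import numpy as np
from scipy import stats

spe_recovery = np.array([86.2, 83.5, 84.9, 82.7, 85.1])  # hypothetical SPE recoveries (%)
lle_recovery = np.array([78.3, 76.9, 77.8, 75.6, 78.4])  # hypothetical LLE recoveries (%)

# Welch's t-test does not assume equal variances between the two methods.
t_stat, p_value = stats.ttest_ind(spe_recovery, lle_recovery, equal_var=False)
print(f"SPE mean = {spe_recovery.mean():.1f}%, LLE mean = {lle_recovery.mean():.1f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would mirror the reported result
```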
892.
Immunization with a proper dose of an antigenic stimulus leads to cell proliferation and an antibody response by circulating lymphocytes. We have previously observed that secondary immunized splenocytes resist ceramide-mediated apoptosis in vitro. Our present study investigates the in vivo effect of immunization on apoptosis. Mice were given either a primary or a secondary dose of tetanus toxoid; unimmunized mice served as controls. Unimmunized, primary and secondary immunized mice were later exposed to chemotherapeutic drugs (etoposide, methotrexate or vincristine) to induce apoptosis, which was assessed using the Feulgen reaction on 5 μm paraffin sections of spleen. Secondary immunized mice showed a lower percentage of apoptosis than primary immunized or unimmunized mice exposed to any of the chemotherapeutic drugs. We thus conclude that secondary immunization inhibits chemotherapeutic drug-induced apoptosis in vivo.
893.
Neutralization theory and online software piracy: An empirical analysis
Accompanying the explosive growth of information technology is the increasing frequency of antisocial and criminal behavior on the Internet. Online software piracy is one such behavior, and this study approaches the phenomenon through the theoretical framework of neutralization theory. The suitability and applicability of nine techniques of neutralization in determining the act are tested via logistic regression analyses on cross-sectional data collected from a sample of university students in the United States. Overall, neutralization was found to be only weakly related to experience with online software piracy; other elements that appear more salient are suggested and discussed in conclusion.
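The analysis described is a standard logistic regression of a binary piracy outcome on neutralization-technique scores. A minimal sketch with invented variable names and synthetic data (the study's actual instrument is not reproduced here):

```python
# Hedged sketch: logistic regression of self-reported piracy on neutralization scores.
# All column names and values are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "denial_of_injury": rng.integers(1, 6, n),           # 1-5 Likert scores (made up)
    "denial_of_victim": rng.integers(1, 6, n),
    "appeal_to_higher_loyalties": rng.integers(1, 6, n),
})
# Synthetic binary outcome, only weakly tied to the predictors.
logit = 0.1 * df["denial_of_injury"] - 1.0
df["pirated_software"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["denial_of_injury", "denial_of_victim",
                        "appeal_to_higher_loyalties"]])  # add intercept term
model = sm.Logit(df["pirated_software"], X).fit(disp=0)
print(model.summary())  # weak coefficients would echo the paper's finding
```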
894.
Satisfying clinical information needs remains a major challenge in medicine, underscored by recent studies showing high medical error rates and suboptimal physician adherence to evidence-based practice guidelines. Advanced clinical decision support systems can improve practitioner performance and patient outcomes. Similarly, integrating online information resources into electronic health records (EHRs) shows great potential for positively impacting health care quality. This paper explores the evolution and current status of knowledge-based resource linkages within EHRs, including the benefits and drawbacks, as well as the important role librarians can play in this process.
895.
Text document clustering provides an effective and intuitive navigation mechanism for organizing a large number of retrieval results by grouping documents into a small number of meaningful classes. Many well-known text clustering methods use a long list of words as the vector space, which is often unsatisfactory for two reasons: first, it keeps the dimensionality of the data very high, and second, it ignores important relationships between terms such as synonyms or antonyms. Our unsupervised method addresses both problems by using ANNIE, WordNet lexical categories and the WordNet ontology to create a well-structured document vector space whose low dimensionality allows common clustering algorithms to perform well. For the clustering step we chose the bisecting k-means and the Multipole tree, a modified version of the Antipole tree data structure, for their accuracy and speed respectively.
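The paper's pipeline builds on ANNIE and WordNet; purely as a hedged approximation, the sketch below maps words to WordNet lexical categories (NLTK's lexname, standing in for the paper's category extraction) and clusters the resulting low-dimensional vectors with scikit-learn's bisecting k-means. The Multipole tree step is omitted, and the toy documents are invented.

```python
# Hedged sketch: build a low-dimensional document vector space from WordNet
# lexical categories, then cluster with bisecting k-means. Approximation only;
# ANNIE preprocessing and the Multipole tree are omitted.
from collections import Counter
import numpy as np
from nltk.corpus import wordnet as wn          # requires: nltk.download("wordnet")
from sklearn.cluster import BisectingKMeans    # scikit-learn >= 1.1

LEXNAMES = sorted({s.lexname() for s in wn.all_synsets()})  # ~45 categories

def doc_vector(tokens):
    """Count how many tokens fall in each WordNet lexical category."""
    counts = Counter()
    for tok in tokens:
        synsets = wn.synsets(tok)
        if synsets:
            counts[synsets[0].lexname()] += 1  # crude: first sense only
    return np.array([counts[c] for c in LEXNAMES], dtype=float)

docs = [["bank", "loan", "money"], ["dog", "cat", "animal"], ["loan", "credit"]]
X = np.vstack([doc_vector(d) for d in docs])
labels = BisectingKMeans(n_clusters=2, random_state=0).fit_predict(X)
print(labels)  # the two finance documents should share a cluster
```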
896.
Intelligent use of the many diverse forms of data available on the Internet requires new tools for managing and manipulating heterogeneous forms of information. This paper uses WHIRL, an extension of relational databases that can manipulate textual data using statistical similarity measures developed by the information retrieval community. We show that although WHIRL is designed for more general similarity-based reasoning tasks, it is competitive with mature systems designed explicitly for inductive classification. In particular, WHIRL is well suited for combining different sources of knowledge in the classification process. We show on a diverse set of tasks that the use of appropriate sets of unlabeled background knowledge often decreases error rates, particularly if the number of examples or the size of the strings in the training set is small. This is especially useful when labeling text is a labor-intensive job and when there is a large amount of information available about a particular problem on the World Wide Web.
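WHIRL itself implements similarity joins over TF-IDF cosine similarity inside a relational query language, which is not reproduced here. As a hedged stand-in for the classification behavior described above, a nearest-neighbor classifier over TF-IDF vectors with cosine distance captures the flavor; the training texts and labels are invented.

```python
# Hedged sketch: TF-IDF / cosine nearest-neighbor text classification, in the
# spirit of WHIRL's similarity-based reasoning. Not WHIRL itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

train_texts = ["cheap flights to paris",
               "python list comprehension",
               "hotel booking deals"]
train_labels = ["travel", "programming", "travel"]  # toy labels, illustrative only

clf = make_pipeline(
    TfidfVectorizer(),                                   # text -> TF-IDF vectors
    KNeighborsClassifier(n_neighbors=1, metric="cosine"),  # cosine-similarity vote
)
clf.fit(train_texts, train_labels)
print(clf.predict(["flight and hotel package"]))  # -> ['travel']
```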
897.
In distributed information retrieval systems, document overlaps occur frequently among different component databases. This paper presents an experimental investigation and evaluation of a group of result merging methods, including the shadow document method and the multi-evidence method, in the environment of overlapping databases. We assume that, apart from the returned document lists (with rankings or scores), no extra information about retrieval servers and text databases is available, which is the usual case for many applications on the Internet and the Web. The experimental results show that the shadow document method and the multi-evidence method are the two best methods when overlap is high, while round-robin is best for low overlap. The experiments also show that [0,1] linear normalization is a better option than linear regression normalization for result merging in a heterogeneous environment.
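The [0,1] linear normalization favored by these experiments is plain min-max rescaling of each server's scores before fusion. A minimal sketch follows; the duplicate-summing rule is only a crude stand-in for the paper's multi-evidence method, and the scores are invented.

```python
# Hedged sketch: min-max ([0,1] linear) score normalization followed by a simple
# score-based merge of overlapping result lists.
def minmax(scores):
    """Rescale one server's raw scores linearly into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0                    # guard against identical scores
    return {doc: (s - lo) / span for doc, s in scores.items()}

def merge(result_lists):
    fused = {}
    for scores in map(minmax, result_lists):
        for doc, s in scores.items():
            # A duplicate across databases is extra evidence: sum its scores.
            fused[doc] = fused.get(doc, 0.0) + s
    return sorted(fused, key=fused.get, reverse=True)

server_a = {"d1": 12.0, "d2": 7.5, "d3": 3.1}  # raw scores, hypothetical
server_b = {"d2": 0.9, "d4": 0.4}
print(merge([server_a, server_b]))  # d2 benefits from appearing in both lists
```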
898.
Census information of some form has been collected in Canada since the 1611 census of New France, and Aboriginal people, identified or not, have been included in these enumerations. The collection of this information has had a profound impact on Aboriginal people and has helped shape their relationship with the dominant society. In response, Canadian Aboriginal people have often resisted and refused to co-operate with census takers and their masters. This article examines this phenomenon, focusing on the censuses conducted from the post-Confederation period to the present. A census is made to collect information on populations and individuals that can then be used to configure and shape social and political relations between those being enumerated and the creators of the census. The human objects of the census, however, are not just passive integers, and they have resisted its creation in a number of ways: being “missing” when the census is taken, refusing to answer the questions posed by enumerators, or even driving them off Aboriginal territory. A census identifies elements of the social order and attempts to set them in their “proper” place, and those who do not wish to be part of that order may refuse to take part. Archivists and historians must understand that the knowledge gained in a census is bound up with the conditions of its own creation. This has been noted by contemporary Aboriginal researchers, who often state that the archival record of their people distorts history and reflects the ideas and superficial observations of its Euro-Canadian creators. Changes to the Census of Canada since 1981 have increased the participation rate and therefore changed the nature of the record.
Brian Edward Hubner is currently Acquisition and Access Archivist at the University of Manitoba Archives & Special Collections. He was previously employed at the Archives of Manitoba, in Government Records; Queen’s University Archives, Kingston; and the National Archives of Canada, Ottawa. He holds a Master of Arts (History, in Archival Studies) from the University of Manitoba and a Master of Arts (History) from the University of Saskatchewan. The second edition of his co-authored book on the history of the Cypress Hills of Saskatchewan and Alberta is being published in 2007. He has published articles and delivered conference papers on Canadian Aboriginal peoples, including “Horse Stealing and the Borderline: The N.W.M.P. and the Control of Indian Movement, 1874-1900.” His current research focuses on the relationship between Canada’s Aboriginal Peoples and Canadian archives. Brian is married and has two children.
899.
This article describes the first half century of the Communist government’s supervision and management of the central-government archives of the last two dynasties. Immediately upon its ascent to power in 1949, the new Communist government took great interest in assembling and protecting the country’s archival documents, readying the Ming-Qing archives for access by scholars, and preparing selected materials for publication. By the 1980s Beijing’s Number One Historical Archives, in charge of the largest holding of Ming-Qing documents, had become the first Chinese authority to complete a full sorting and preliminary catalogues for such a collection. Moreover, to facilitate searches, an attempt has recently begun to create a subject-heading system for these and other holdings in the country. In the final decades of this first half century, foreign researchers were admitted for the first time, and tours and international exchanges began to take place.
900.
Previous papers on grey literature by the authors have described (1) the need for formal metadata to allow machine understanding and therefore scalable operations; (2) the enhancement of repositories of grey (and other) e-publications by linking with CRIS (Current Research Information Systems); and (3) the use of the research process to collect metadata incrementally, reducing the threshold barrier for end-users and improving quality in an ambient GRIDs environment. This paper takes the development one step further and proposes “intelligent” grey objects. The hypothesis has two parts: (1) that the use of passive catalogs of metadata does not scale (a) in a highly distributed environment with millions of nodes and (b) with vastly increased volumes of grey R&D publications and their associated metadata; and (2) that a new paradigm is required that (a) integrates grey with white literature and other R&D outputs such as software, data, products and patents, (b) does so in a self-managing, self-optimizing way, and (c) automatically manages curation, provenance, digital rights, trust, security and privacy.

Concerning (1), existing repositories provide catalogs; harvesting them takes ever more time, so they are never current, and the end-user must expend much manual effort and intelligence to use the results. The combined elapsed time of (1) the network, (2) the centralized (or centrally controlled distributed) catalog-server searches and (3) end-user intervention becomes unacceptable. Concerning (2), no paradigm currently known to the authors satisfies the requirement. Our proposal is outlined below.

“Hyperactive” combines the hyperlinked and active properties of a (grey) object. Hyperlinking implies multimedia components linked to form the object, as well as external links to other resources. “Active” implies that objects do not lie passively in a repository waiting to be retrieved by end-users. They “get a life”: the object moves through the network knowing where it is going. A hyperactive grey object is wrapped by its (incrementally recorded) formal metadata and an associated software agent. It moves through process steps such as initial concept, authoring, reviewing and depositing in a repository; the workflow is based on the rules and information in the corporate data repository with which the agent interacts. Once the object is deposited, its agent actively pushes it to the end-users (or systems) whose metadata indicate interest or an obligation in a workflowed process. The agent checks the object and user (or system) metadata for rights, privacy and security parameters, and for any charges, and assures compatibility. Alternatively, the object can be found passively by end-user or system agents. The object can also associate itself with other objects, forming relationships based on metadata or content: declared relationships include references and citations; workflowed relationships include versions and links to corporate information, research datasets and software; inferenced relationships are discovered relationships, such as one between documents by different authors developed from an earlier idea of a third author. Components of this paradigm have been implemented to some extent. The challenge, respecting part two of the hypothesis, is implementing the integration architecture. This surely is harnessing the power of grey.
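The paper proposes this paradigm without an implementation, so any code can only caricature it. A minimal sketch under invented names: an object wrapped by its formal metadata, with an agent that pushes it to users whose interest metadata match and whose rights are compatible.

```python
# Hedged sketch of a "hyperactive" grey object: the object carries its own formal
# metadata, and an agent pushes it to matching subscribers. Purely illustrative;
# every name here is invented, not an API from the paper.
from dataclasses import dataclass, field

@dataclass
class GreyObject:
    content: str
    metadata: dict = field(default_factory=dict)  # incrementally recorded, formal

@dataclass
class UserProfile:
    name: str
    interests: set
    rights: set  # e.g. licences this user may receive

def agent_push(obj: GreyObject, users: list) -> list:
    """Deliver the object to users whose interests match and whose rights allow it."""
    delivered = []
    for user in users:
        topic_ok = bool(user.interests & set(obj.metadata.get("topics", [])))
        rights_ok = obj.metadata.get("licence") in user.rights
        if topic_ok and rights_ok:
            delivered.append(user.name)  # stand-in for actual network delivery
    return delivered

report = GreyObject("draft report", {"topics": ["grey literature"], "licence": "CC-BY"})
users = [UserProfile("ann", {"grey literature"}, {"CC-BY"}),
         UserProfile("bob", {"astronomy"}, {"CC-BY"})]
print(agent_push(report, users))  # -> ['ann']
```

The push loop is the inversion the paper argues for: instead of end-users polling a passive catalog, the deposited object's agent matches its own metadata against user metadata and initiates delivery.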