121.
Urinary citric acid and calcium levels have been estimated in the urine of 20 normal healthy persons as well as 12 urinary stone patients. The inhibition efficiency of these urine samples towards the mineralisation of urinary stone-forming minerals, viz., calcium phosphate, oxalate or carbonate, has been studied in an experimental model. The data have been correlated statistically by computing the coefficient of determination and the unexplained variance. Clinico-biochemical indexing of the calcium urolithiasis risk factor has been attempted in the light of these data.
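The statistical step named above, computing the coefficient of determination and the unexplained variance for paired measurements, can be sketched in a few lines. The data values below are hypothetical stand-ins, not the study's measurements.

```python
# Coefficient of determination (r^2) and unexplained variance (1 - r^2)
# for two paired variables, computed from first principles.

def coefficient_of_determination(x, y):
    """Squared Pearson correlation for paired observations x, y."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical paired measurements: urinary citrate (mg/day) vs.
# inhibition efficiency (%) for five subjects.
citrate = [320, 450, 510, 610, 700]
inhibition = [41, 55, 60, 72, 80]

r2 = coefficient_of_determination(citrate, inhibition)
unexplained = 1 - r2  # the variance the correlation does not explain
```

The unexplained variance is simply the complement of r², so the two quantities reported in the abstract always sum to one.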
122.
123.
124.
K. A. Faseehuddin Shakir Basavaraj Madhusudhan 《Indian journal of clinical biochemistry : IJCB》2007,22(1):117-121
Rats fed a hypercholesterolemic diet showed a significant increase in serum total cholesterol, liver homogenate total cholesterol and HDL-cholesterol, and altered LDL-cholesterol and HDL/LDL ratio in comparison with controls. A flaxseed chutney (FC)-supplemented diet (15 %, w/w) was found to be more effective in restoring the lipid profile changes in rats fed cholesterol (1.0 %). The activities of the serum marker enzymes glutamate oxaloacetate transaminase (GOT), glutamate pyruvate transaminase (GPT) and alkaline phosphatase (ALP) were elevated significantly in carbon tetrachloride-induced rats. Administration of flaxseed chutney (15 %, w/w) depleted the serum marker enzymes and promoted recovery, showing a significant hepatoprotective effect. It was observed that a flaxseed chutney-supplemented diet could lower serum cholesterol and, as a potential source of antioxidants, could protect against hepatotoxic damage induced by carbon tetrachloride (CCl4) in rats.
125.
Lakshmi Lavanya Reddy Swarup A. V. Shah Alpa J. Dherai Chandrashekhar K. Ponde Tester F. Ashavaid 《Indian journal of clinical biochemistry : IJCB》2016,31(1):87-92
Acute coronary syndrome (ACS) is a term for a range of clinical signs and symptoms suggestive of myocardial ischemia. It results in functional and structural changes, ultimately releasing proteins from injured cardiomyocytes. These cardiac markers play a major role in the diagnosis and prognosis of ACS. This study aims to assess the efficacy of heart-type fatty acid binding protein (h-FABP) as a marker for ACS alongside the routinely used hs-TropT. In our observational study, plasma h-FABP (cut-off 6.32 ng/ml) and routinely measured hs-TropT (cut-offs 0.1 and 0.014 ng/ml) were estimated by immunometric laboratory assays in 88 patients with acute chest pain. Based on the clinical and laboratory test findings, the patients were grouped into ACS (n = 41) and non-ACS (n = 47). The diagnostic sensitivity, specificity, NPV, PPV and ROC curve at 95 % CI were determined. The sensitivities of hs-TropT (0.1 ng/ml), hs-TropT (0.014 ng/ml) and h-FABP were 53, 86 and 78 % respectively, and the specificities for the same were 98, 73 and 70 % respectively. The sensitivity, specificity and NPV calculated for the cut-off combination of hs-TropT 0.014 ng/ml and h-FABP were 100, 51 and 100 % respectively. These results were substantiated by ROC analysis. Measuring plasma h-FABP and hs-TropT together on admission appears to be a more precise predictor of ACS than either hs-TropT or h-FABP alone.
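The diagnostic metrics reported above can be sketched from a 2x2 confusion table. The 100 % sensitivity for the two-marker combination is consistent with an "either marker positive" (OR) rule, which is assumed here; the patient values below are invented for illustration and are not the study data.

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 table, plus an
# OR-rule combination of two markers with separate cut-offs.

def diagnostics(tp, fn, tn, fp):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return sens, spec, ppv, npv

def combined_positive(trop_t, h_fabp, trop_cut=0.014, fabp_cut=6.32):
    """Positive if either marker reaches its cut-off (OR rule)."""
    return trop_t >= trop_cut or h_fabp >= fabp_cut

# Hypothetical patients: (hs-TropT ng/ml, h-FABP ng/ml, has_ACS)
patients = [(0.020, 7.1, True), (0.010, 8.0, True),
            (0.030, 4.0, True), (0.005, 2.1, False),
            (0.002, 9.0, False), (0.001, 1.0, False)]

tp = sum(combined_positive(t, f) and acs for t, f, acs in patients)
fn = sum(not combined_positive(t, f) and acs for t, f, acs in patients)
tn = sum(not combined_positive(t, f) and not acs for t, f, acs in patients)
fp = sum(combined_positive(t, f) and not acs for t, f, acs in patients)

sens, spec, ppv, npv = diagnostics(tp, fn, tn, fp)
```

As in the study, the OR rule buys sensitivity (no ACS case is missed) at the cost of specificity, since a false positive on either marker flags the patient.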
126.
Chandrawati Kumari Bijo Varughese Siddarth Ramji Seema Kapoor 《Indian journal of clinical biochemistry : IJCB》2016,31(4):414-422
The pre-analytical extraction process for accurate detection of organic acids is a crucial step in the diagnosis of organic acidemias by GC-MS analysis. Extraction is accomplished either by solid-phase extraction (SPE) or by liquid–liquid extraction (LLE), and both procedures are used in metabolic laboratories all over the world. In this study we compared these two extraction procedures with respect to precision, accuracy, percent recovery of metabolites, number of metabolites isolated, and time and cost in a resource-constrained setup. We observed that the mean recovery by SPE was 84.1 % and by LLE 77.4 % (p < 0.05). Moreover, the average numbers of metabolites isolated by SPE and LLE were 161.8 ± 18.6 and 140.1 ± 20.4 respectively. The processing cost of LLE was lower. In a cost-constrained setting, LLE may be the practical option for organic acid analysis.
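The percent-recovery comparison above reduces to simple summary statistics over replicate spiked runs. A minimal sketch follows; the replicate recovery figures are hypothetical (chosen only so their means resemble the reported 84.1 % and 77.4 %), not the study's measurements.

```python
# Percent recovery of a spiked metabolite, and mean +/- SD summaries
# used to contrast two extraction procedures.

def percent_recovery(measured, spiked):
    return 100.0 * measured / spiked

def mean_sd(values):
    n = len(values)
    m = sum(values) / n
    var = sum((v - m) ** 2 for v in values) / (n - 1)  # sample variance
    return m, var ** 0.5

# Hypothetical recoveries (%) across five replicate runs per method.
spe = [86.0, 83.5, 85.2, 82.0, 84.1]
lle = [78.2, 76.5, 79.0, 75.8, 77.6]

spe_mean, spe_sd = mean_sd(spe)
lle_mean, lle_sd = mean_sd(lle)
```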
127.
Sameer Hinduja 《Ethics and Information Technology》2007,9(3):187-204
Accompanying the explosive growth of information technology is the increasing frequency of antisocial and criminal behavior on the Internet. Online software piracy is one such behavior, and this study approaches the phenomenon through the theoretical framework of neutralization theory. The suitability and applicability of nine techniques of neutralization in accounting for the act are tested via logistic regression analyses on cross-sectional data collected from a sample of university students in the United States. Generally speaking, neutralization was found to be only weakly related to experience with online software piracy; other elements that appear more salient are suggested and discussed in conclusion.
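The logistic regression design named above, regressing a binary piracy-experience outcome on a neutralization measure, can be sketched with plain gradient descent. The data below are synthetic and the single-predictor model is a simplification of the study's multi-technique analysis.

```python
# Logistic regression of a binary outcome on one predictor,
# fit by batch gradient descent (no external libraries).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit y ~ sigmoid(b0 + b1*x) by minimizing log-loss."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(b0 + b1 * x) - y  # gradient of log-loss
            g0 += err
            g1 += err * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic data: higher neutralization score -> piracy more likely.
scores = [1, 2, 2, 3, 4, 5, 6, 6, 7, 8]
pirated = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]

b0, b1 = fit_logistic(scores, pirated)
```

A positive fitted slope (b1 > 0) corresponds to higher neutralization predicting greater odds of piracy experience; the study's point is that such coefficients turned out weak.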
128.
Diego Reforgiato Recupero 《Information Retrieval》2007,10(6):563-579
Text document clustering provides an effective and intuitive navigation mechanism for organizing a large number of retrieval results by grouping documents into a small number of meaningful classes. Many well-known text clustering methods use a long list of words as the vector space, which is often unsatisfactory for two reasons: first, it keeps the dimensionality of the data very high, and second, it ignores important relationships between terms such as synonyms or antonyms. Our unsupervised method solves both problems by using ANNIE, WordNet lexical categories and the WordNet ontology to create a well-structured document vector space whose low dimensionality allows common clustering algorithms to perform well. For the clustering step we have chosen the bisecting k-means algorithm and the Multipole tree, a modified version of the Antipole tree data structure, chosen respectively for their accuracy and speed.
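The bisecting k-means step named above can be sketched in pure Python: repeatedly split the largest cluster with plain 2-means until k clusters remain. Small 2-D points stand in for the paper's reduced document vectors; this is a generic illustration, not the paper's implementation.

```python
# Bisecting k-means: grow from one cluster to k by repeatedly
# 2-means-splitting the largest current cluster.
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans2(points, iters=20, seed=0):
    """Plain 2-means: returns the two sub-clusters of `points`."""
    rng = random.Random(seed)
    centers = rng.sample(points, 2)
    dim = len(points[0])
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            nearer = 0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1
            groups[nearer].append(p)
        for i in (0, 1):
            if groups[i]:  # recompute each centroid
                centers[i] = tuple(sum(p[d] for p in groups[i]) / len(groups[i])
                                   for d in range(dim))
    return groups

def bisecting_kmeans(points, k):
    clusters = [points]
    while len(clusters) < k:
        clusters.sort(key=len)       # split the largest cluster next
        largest = clusters.pop()
        a, b = kmeans2(largest)
        clusters += [a, b]
    return clusters

# Toy "documents": two tight groups plus a smaller outlying group.
pts = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1),
       (5.0, 5.1), (5.1, 5.0), (5.2, 5.2),
       (10.0, 0.0), (10.1, 0.2)]
clusters = bisecting_kmeans(pts, 3)
```

Splitting only one cluster per step is what gives bisecting k-means its speed and its tendency toward balanced, well-separated clusters compared with flat k-means.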
129.
Haym Hirsh
Intelligent use of the many diverse forms of data available on the Internet requires new tools for managing and manipulating heterogeneous forms of information. This paper uses WHIRL, an extension of relational databases that can manipulate textual data using statistical similarity measures developed by the information retrieval community. We show that although WHIRL is designed for more general similarity-based reasoning tasks, it is competitive with mature systems designed explicitly for inductive classification. In particular, WHIRL is well suited to combining different sources of knowledge in the classification process. We show on a diverse set of tasks that the use of appropriate sets of unlabeled background knowledge often decreases error rates, particularly when the number of examples or the size of the strings in the training set is small. This is especially useful when labeling text is a labor-intensive job and a large amount of information about a particular problem is available on the World Wide Web.
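The core idea behind WHIRL, joining rows from different tables on the statistical similarity of their text fields rather than on exact equality, can be sketched with TF-IDF cosine similarity. The toy tables and the smoothed-idf weighting below are illustrative assumptions, not WHIRL's actual formulas.

```python
# A similarity join: match each row of `left` to its most similar
# row in `right` by TF-IDF cosine similarity of the text fields.
import math
from collections import Counter

def tfidf_vectors(docs):
    """One {term: weight} vector per document (smoothed idf)."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    df = Counter(t for toks in tokenized for t in set(toks))
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: c * (math.log((1 + n) / (1 + df[t])) + 1)
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy "tables" whose text columns never match exactly.
left = ["apple iphone 12", "samsung galaxy s21", "google pixel 5"]
right = ["iphone 12 case", "galaxy s21 charger", "pixel 5 screen"]

vecs = tfidf_vectors(left + right)
lv, rv = vecs[:len(left)], vecs[len(left):]

# For each left row, index of the most similar right row.
matches = [max(range(len(rv)), key=lambda j: cosine(u, rv[j])) for u in lv]
```

An exact-equality join would return nothing here; the similarity join recovers the intended pairing, which is what makes the approach usable for classification against noisy web text.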
130.
Anne Asserson
Previous papers on grey literature by the authors have described (1) the need for formal metadata to allow machine understanding and therefore scalable operations; (2) the enhancement of repositories of grey (and other) e-publications by linking with CRIS (Current Research Information Systems); and (3) the use of the research process to collect metadata incrementally, reducing the threshold barrier for end-users and improving quality in an ambient GRIDs environment. This paper takes the development one step further and proposes "intelligent" grey objects.
The hypothesis is in two parts: (1) that the use of passive catalogs of metadata does not scale (a) in a highly distributed environment with millions of nodes and (b) with vastly increased volumes of R&D output grey publications with associated metadata; and (2) that a new paradigm is required that (a) integrates grey with white literature and other R&D outputs such as software, data, products and patents, (b) does so in a self-managing, self-optimizing way, and (c) automatically manages curation, provenance, digital rights, trust, security and privacy. Concerning (1), existing repositories provide catalogs; harvesting takes ever more time, so the catalogs are never current, and the end-user must expend much manual effort and intelligence to utilize the results. The combined elapsed time of (1) the network, (2) the centralized (or centrally controlled distributed) catalog server searches and (3) end-user intervention becomes unacceptable. Concerning (2), no paradigm currently known to the authors satisfies the requirement. Our proposal is outlined below.
"Hyperactive" combines the hyperlinked and active properties of a (grey) object. Hyperlinking implies multimedia components linked to form the object, as well as external links to other resources. "Active" implies that objects do not lie passively in a repository waiting to be retrieved by end-users: they "get a life", and the object moves through the network knowing where it is going. A hyperactive grey object is wrapped by its (incrementally recorded) formal metadata and an associated (software) agent. It moves through process steps such as initial concept, authoring, reviewing and depositing in a repository; the workflow is based on the rules and information in the corporate data repository with which the agent interacts. Once the object is deposited, its agent actively pushes the object to the end-users (or systems) whose metadata indicate interest or an obligation in a workflowed process. The agents check the object and user (or system) metadata for rights, privacy and security parameters and for any charges, and assure compatibility. Alternatively, the object can be found passively by end-user or system agents.
The object can also associate itself with other objects, forming relationships utilising metadata or content. Declared relationships include references and citations; workflowed relationships include versions, as well as links to corporate information, research datasets and software; inferenced relationships are discovered relationships, such as one between documents by different authors developed from an earlier idea of a third author. Components of this paradigm have been implemented to some extent. The challenge, respecting part two of the hypothesis, is implementing the integration architecture. This surely is harnessing the power of grey.
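The "active push" step described above, where an object's agent compares the object's metadata against each user's interest and rights metadata before delivering it, can be sketched as a simple matching rule. The metadata schema (field names, set-valued subjects, access levels) is entirely invented for illustration; the paper does not specify one.

```python
# An agent pushes a grey object only to users whose metadata show
# both an overlapping interest and sufficient access rights.

def agent_push(obj, users):
    """Return the names of users the object should be pushed to."""
    delivered = []
    for user in users:
        interested = bool(obj["subjects"] & user["interests"])
        allowed = obj["access"] in user["rights"]
        if interested and allowed:
            delivered.append(user["name"])
    return delivered

# Hypothetical object and user metadata records.
grey_object = {
    "title": "Project report X",
    "subjects": {"grey literature", "metadata"},
    "access": "institutional",
}
users = [
    {"name": "alice", "interests": {"metadata"},
     "rights": {"institutional", "open"}},
    {"name": "bob", "interests": {"metadata"},
     "rights": {"open"}},                      # interested, lacks rights
    {"name": "carol", "interests": {"chemistry"},
     "rights": {"institutional"}},             # entitled, not interested
]

recipients = agent_push(grey_object, users)
```

In the paper's terms, the same check would also cover privacy, security and charging parameters; only the interest-and-rights core is shown here.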