Full-text access type
Paid full text | 12203 articles |
Free | 147 articles |
Free (domestic) | 19 articles |
Subject category
Education | 8420 articles |
Scientific research | 1229 articles |
National cultures | 118 articles |
Sports | 1322 articles |
General | 8 articles |
Cultural theory | 164 articles |
Information dissemination | 1108 articles |
Publication year
2022 | 107 articles |
2021 | 153 articles |
2020 | 240 articles |
2019 | 396 articles |
2018 | 470 articles |
2017 | 487 articles |
2016 | 402 articles |
2015 | 291 articles |
2014 | 336 articles |
2013 | 2341 articles |
2012 | 300 articles |
2011 | 279 articles |
2010 | 239 articles |
2009 | 203 articles |
2008 | 217 articles |
2007 | 246 articles |
2006 | 208 articles |
2005 | 190 articles |
2004 | 175 articles |
2003 | 200 articles |
2002 | 166 articles |
2001 | 176 articles |
2000 | 173 articles |
1999 | 174 articles |
1998 | 87 articles |
1997 | 88 articles |
1996 | 127 articles |
1995 | 91 articles |
1994 | 105 articles |
1993 | 101 articles |
1992 | 166 articles |
1991 | 161 articles |
1990 | 151 articles |
1989 | 141 articles |
1988 | 123 articles |
1987 | 127 articles |
1986 | 145 articles |
1985 | 142 articles |
1984 | 114 articles |
1983 | 134 articles |
1982 | 114 articles |
1981 | 109 articles |
1980 | 97 articles |
1979 | 173 articles |
1978 | 117 articles |
1977 | 116 articles |
1976 | 111 articles |
1975 | 83 articles |
1974 | 90 articles |
1971 | 73 articles |
Sort order: 10,000 results in total; search time 15 ms
61.
62.
Media Economics Research: History, Methods, and Paradigms  Total citations: 3 (self-citations: 0, citations by others: 3)
This article reviews the development of media economics in the West, its research methods, and its research agendas across different periods. Media economics is an applied discipline built on a range of economic theories and analytical methods; it studies how economic and financial forces shape media systems and media organisations. Western media economics emerged in the 1950s and has since grown into an active, interdisciplinary research field. The article argues that the main research paradigms of Western media economics are the theoretical, applied, and critical paradigms, and that its methods can be divided into industry and market studies, firm studies, and impact studies. Alongside this survey of Western research, the article also briefly reviews the emergence and development of media economics in China.
63.
Tatjana Bayerová 《Studies in Conservation》2018,63(3):171-188
The technical study of wall paintings from the Buddhist temple complex at Nako, Western Himalayas, was one of the basic preconditions required for designing an appropriate conservation strategy. The complex, composed of four temples from the eleventh–twelfth century, offered a unique opportunity to carry out comprehensive research into the technology and painting materials used in early and later western Tibetan Buddhist wall paintings, as well as a comparative assessment with murals from other sites in the Western Himalayas. The study was based on extensive fieldwork and an integrated analytical approach comprising a wide range of non-destructive and micro-destructive methods. Among the most notable results of the research were an answer to the question of whether the paintings in the smaller temples are coeval with the other original murals, the precise characterisation of binding media, the detection of the yellow dye gamboge and of the natural minerals posnjakite and brochantite (identified for the first time in Himalayan murals), the clarification of the technology of metal decoration, and of the making of raised elements.
64.
Lighting a cultural heritage artifact requires balancing visual perception with preventive conservation, by providing the best lighting (in terms of spectral distribution and quantity) to enable the viewer to appreciate details and color, while limiting photo-induced degradation. The paper outlines the methodology applied by a multi-disciplinary team while lighting the Shroud of Turin at its last public exhibition in 2015. The methodology considered the special requirements of the Shroud, including exposure to ultraviolet light, while providing appropriate display conditions that would meet audience expectations. The desired appearance (readability of the body image and color) was defined with the help of Shroud researchers and confirmed by subjective tests, while appropriate light levels for preservation were set in agreement with standard requirements and using knowledge of the degradation of linen in visible and UV light. The installation provided a controlled environment and a managed visitor route to the Shroud, assuring excellent perception of both details and color, with the lowest illuminance level at about 15 lx.
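The trade-off the abstract describes is usually budgeted as cumulative light dose (illuminance × time, in lux-hours). A minimal sketch of that arithmetic follows; the daily hours and exhibition length are hypothetical values for illustration, not figures from the paper — only the 15 lx level is taken from the abstract.

```python
# Illustrative cumulative light-dose calculation for a displayed artifact.
# The hours-per-day and display-days values below are hypothetical.

def cumulative_dose_lux_hours(illuminance_lx: float,
                              hours_per_day: float,
                              display_days: float) -> float:
    """Total visible-light dose received over the display period."""
    return illuminance_lx * hours_per_day * display_days

# Example: 15 lx (the lowest level cited in the paper) for an assumed
# 10 h/day over an assumed 67-day exhibition.
dose = cumulative_dose_lux_hours(15.0, 10.0, 67.0)
print(f"Cumulative dose: {dose:.0f} lx·h")  # 15 * 10 * 67 = 10050 lx·h
```

Conservation guidelines typically cap the annual lux-hour budget for highly light-sensitive materials such as dyed textiles, so a curator would compare this figure against the applicable limit.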
65.
Debasis Ganguly Gareth J. F. Jones Aarón Ramírez-de-la-Cruz Gabriela Ramírez-de-la-Rosa Esaú Villatoro-Tello 《Information Retrieval》2018,21(1):1-23
Automatic detection of source code plagiarism is an important research field for both the commercial software industry and within the research community. Existing methods of plagiarism detection primarily involve exhaustive pairwise document comparison, which does not scale well for large software collections. To achieve scalability, we approach the problem from an information retrieval (IR) perspective. We retrieve a ranked list of candidate documents in response to a pseudo-query representation constructed from each source code document in the collection. The challenge in source code document retrieval is that the standard bag-of-words (BoW) representation model for such documents is likely to result in many false positives being retrieved, because of the use of identical programming language specific constructs and keywords. To address this problem, we make use of an abstract syntax tree (AST) representation of the source code documents. While the IR approach is efficient, it is essentially unsupervised in nature. To further improve its effectiveness, we apply a supervised classifier (pre-trained with features extracted from sample plagiarized source code pairs) on the top ranked retrieved documents. We report experiments on the SOCO-2014 dataset comprising 12K Java source files with almost 1M lines of code. Our experiments confirm that the AST based approach produces significantly better retrieval effectiveness than a standard BoW representation, i.e., the AST based approach is able to identify a higher number of plagiarized source code documents at top ranks in response to a query source code document. The supervised classifier, trained on features extracted from sample plagiarized source code pairs, is shown to effectively filter and thus further improve the ranked list of retrieved candidate plagiarized documents.
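The core retrieval idea — representing a program by its AST structure rather than its surface tokens, then ranking candidates against a pseudo-query — can be sketched as follows. This is a hypothetical Python analogy using the standard-library `ast` module, not the authors' pipeline, which operates on Java source from SOCO-2014.

```python
# Sketch of AST-based candidate retrieval: each file is reduced to a bag of
# AST node-type names, and candidates are ranked by cosine similarity to a
# pseudo-query built the same way. Renamed identifiers do not change the
# profile, which is why this catches disguised copies that a BoW model maps
# to different tokens.
import ast
import math
from collections import Counter

def ast_profile(source: str) -> Counter:
    """Bag of AST node-type names for one source document."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_src: str, collection: dict) -> list:
    """Return (doc_id, score) pairs, most similar candidates first."""
    q = ast_profile(query_src)
    scored = {doc: cosine(q, ast_profile(src)) for doc, src in collection.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

docs = {
    "a.py": "def f(x):\n    return x + 1\n",
    "b.py": "def g(y):\n    return y + 2\n",   # structurally identical to a.py
    "c.py": "for i in range(10):\n    print(i)\n",
}
ranking = rank("def h(z):\n    return z + 3\n", docs)
```

In this toy collection the query is structurally identical to `a.py` and `b.py` despite having different identifiers, so both outrank `c.py`; in the paper's setting, a supervised classifier would then re-examine these top-ranked candidates.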
66.
67.
Alberto González 《Communication Studies》2018,69(4):362-365
This reflection essay describes the Central States Region as an area rich in intercultural communication. González describes Mexican American migrant farmworker organizing as an intercultural activity, since the union activists attempted to influence both Mexican heritage and European heritage audiences. González also describes the many interculturalists working in the Midwest who influenced his early research.
68.
Jiawei Huang Mahda M. Bagher Heather Dohn Ross Nathan Piekielek Jan Oliver Wallgrün Jiayan Zhao 《Journal of Map & Geography Libraries》2018,14(1):40-63
Libraries have been key to preserving culture and historic legacy for centuries. One such treasure cataloged in The Pennsylvania State University (Penn State) Libraries is a collection of over 33,000 Sanborn Fire Insurance Maps. Originally kept safe in metal drawers, this abundance of information is now being digitized by the library, combined with other media such as photographs, and made accessible through a web interface. Inspired by these efforts, we accessed this information and took it a step further. Using state-of-the-art 3D modeling and immersive technologies, we created a historic 3D model and immersive experiences of Penn State, using the 1922 campus as an example. The resulting experiences can be accessed through the web, but also through head-mounted displays (HMDs) and mobile phones in combination with VR viewers such as the Google Cardboard. Additionally, they can be used anywhere in the world or on the campus itself, enabling both remote and in situ experiences and learning. Immersive experiences let us connect to the past, the present, and the future, and as such offer value to digital cultural heritage efforts.
69.
Daniel Maier A. Waldherr P. Miltner G. Wiedemann A. Niekler A. Keinert 《Communication Methods and Measures》2018,12(2-3):93-118
Latent Dirichlet allocation (LDA) topic models are increasingly being used in communication research. Yet, questions regarding the reliability and validity of the approach have received little attention thus far. In applying LDA to textual data, researchers need to tackle at least four major challenges that affect these criteria: (a) appropriate pre-processing of the text collection; (b) adequate selection of model parameters, including the number of topics to be generated; (c) evaluation of the model’s reliability; and (d) the process of validly interpreting the resulting topics. We review the research literature dealing with these questions and propose a methodology that approaches these challenges. Our overall goal is to make LDA topic modeling more accessible to communication researchers and to ensure compliance with disciplinary standards. Consequently, we develop a brief hands-on user guide for applying LDA topic modeling. We demonstrate the value of our approach with empirical data from an ongoing research project.
70.
Employing a number of different standalone programs is a prevalent approach among communication scholars who use computational methods to analyze media content. For instance, a researcher might use a specific program or a paid service to scrape some content from the Web, then use another program to process the resulting data, and finally conduct statistical analysis or produce some visualizations in yet another program. This makes it hard to build reproducible workflows, and even harder to build on the work of earlier studies. To improve this situation, we propose and discuss four criteria that a framework for automated content analysis should fulfill: scalability, free and open source, adaptability, and accessibility via multiple interfaces. We also describe how to put these considerations into practice, discuss their feasibility, and point toward future developments.