Paid full text: 21,392 articles
Free: 110 articles
Free (domestic): 18 articles

Education: 15,463 articles
Scientific research: 2,326 articles
Cultures of various countries: 92 articles
Sports: 1,356 articles
General: 6 articles
Cultural theory: 486 articles
Information and communication: 1,791 articles
2021: 73 articles
2020: 133 articles
2019: 189 articles
2018: 2,435 articles
2017: 2,379 articles
2016: 1,828 articles
2015: 291 articles
2014: 334 articles
2013: 1,824 articles
2012: 441 articles
2011: 967 articles
2010: 1,052 articles
2009: 586 articles
2008: 839 articles
2007: 1,370 articles
2006: 228 articles
2005: 540 articles
2004: 608 articles
2003: 543 articles
2002: 279 articles
2001: 164 articles
2000: 184 articles
1999: 158 articles
1998: 75 articles
1997: 89 articles
1996: 112 articles
1995: 81 articles
1994: 100 articles
1993: 90 articles
1992: 158 articles
1991: 153 articles
1990: 140 articles
1989: 137 articles
1988: 114 articles
1987: 119 articles
1986: 142 articles
1985: 137 articles
1984: 110 articles
1983: 131 articles
1982: 113 articles
1981: 105 articles
1980: 93 articles
1979: 166 articles
1978: 113 articles
1977: 110 articles
1976: 108 articles
1975: 81 articles
1974: 89 articles
1972: 71 articles
1971: 70 articles
Sort order: 10,000 query results in total; search took 31 ms.
101.
ABSTRACT

Latent Dirichlet allocation (LDA) topic models are increasingly being used in communication research. Yet questions regarding the reliability and validity of the approach have received little attention thus far. In applying LDA to textual data, researchers need to tackle at least four major challenges that affect these criteria: (a) appropriate pre-processing of the text collection; (b) adequate selection of model parameters, including the number of topics to be generated; (c) evaluation of the model’s reliability; and (d) the process of validly interpreting the resulting topics. We review the research literature dealing with these questions and propose a methodology that addresses these challenges. Our overall goal is to make LDA topic modeling more accessible to communication researchers and to ensure compliance with disciplinary standards. Consequently, we develop a brief hands-on user guide for applying LDA topic modeling. We demonstrate the value of our approach with empirical data from an ongoing research project.
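To make the four steps above concrete, here is a minimal sketch of such a pipeline in Python using the gensim library; the toy corpus, stop-word list, topic count, and random seed are illustrative assumptions, not values from the study.

```python
# Minimal LDA topic-modeling sketch (gensim); all data and parameters
# below are illustrative assumptions, not values from the article.
from gensim import corpora
from gensim.models import LdaModel

# (a) Pre-processing: lowercase, tokenize, drop stop words and short tokens.
docs = [
    "voters discuss the economy and new jobs",
    "the election campaign dominates news coverage",
    "new climate policy sparks heated public debate",
]
stop_words = {"the", "and"}  # assumed stop-word list
tokens = [
    [w for w in d.lower().split() if w not in stop_words and len(w) > 2]
    for d in docs
]

# (b) Model parameters: vocabulary, bag-of-words corpus, number of topics.
dictionary = corpora.Dictionary(tokens)
bow = [dictionary.doc2bow(t) for t in tokens]
lda = LdaModel(
    corpus=bow,
    id2word=dictionary,
    num_topics=2,     # assumed; in practice compare several candidate values
    random_state=42,  # (c) a fixed seed supports reliability checks
    passes=10,
)

# (d) Interpretation: inspect the highest-probability words per topic.
for topic_id, top_words in lda.print_topics(num_words=5):
    print(topic_id, top_words)
```

In practice, step (c) would also involve re-running the model with different seeds or samples and comparing the resulting topic solutions before interpreting them.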
102.
ABSTRACT

Employing a number of different standalone programs is a prevalent approach among communication scholars who use computational methods to analyze media content. For instance, a researcher might use a specific program or a paid service to scrape content from the Web, then use another program to process the resulting data, and finally conduct statistical analysis or produce visualizations in yet another program. This makes it hard to build reproducible workflows, and even harder to build on the work of earlier studies. To improve this situation, we propose and discuss four criteria that a framework for automated content analysis should fulfill: scalability, free and open-source licensing, adaptability, and accessibility via multiple interfaces. We also describe how to put these considerations into practice, discuss their feasibility, and point toward future developments.
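As an illustration of what a single, scriptable workflow might look like in place of chained standalone programs, the sketch below combines scraping, processing, and a toy analysis in one open-source Python script; the URL and the word-count analysis are hypothetical placeholders, not components of the proposed framework.

```python
# A minimal sketch of a reproducible scrape -> process -> analyze pipeline
# kept in one open-source script; the URL and the word-count "analysis"
# are hypothetical placeholders.
import collections
import re
import urllib.request

URL = "https://example.org/articles"  # hypothetical source

def scrape(url: str) -> str:
    """Fetch raw HTML; a dedicated scraper would go here in real work."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def process(html: str) -> list[str]:
    """Strip tags and tokenize; a placeholder for real pre-processing."""
    text = re.sub(r"<[^>]+>", " ", html)
    return re.findall(r"[a-z]+", text.lower())

def analyze(tokens: list[str]) -> list[tuple[str, int]]:
    """Toy analysis step: the ten most frequent tokens."""
    return collections.Counter(tokens).most_common(10)

if __name__ == "__main__":
    print(analyze(process(scrape(URL))))
```

Because every step lives in one version-controlled script, the workflow can be re-run end to end, which speaks to the reproducibility concern raised above.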
103.
Placing Facebook     
Facebook is challenging professional journalism. These challenges were evident in three incidents from 2016: the allegation that Facebook privileged progressive-leaning news on its trending feature; Facebook’s removal of the Pulitzer Prize-winning “Napalm Girl” photo from the pages of prominent users; and the proliferation of “fake news” during the US presidential election. Using theoretical concepts from the field of boundary work, this paper examines how The Guardian, The New York Times, Columbia Journalism Review and Poynter editorialized Facebook’s role in these three incidents to discursively construct the boundary between the value of professional journalism to democracy and Facebook’s ascendant role in facilitating essential democratic functions. Findings reveal that these publications attempted to define Facebook as a news organization (i.e., include it within the boundaries of journalism) so that they could then criticize the company for not following duties traditionally incumbent upon news organizations (i.e., place it outside the boundaries of journalism). This paper advances scholarship that focuses on both inward and outward conceptions of boundary work, further explores the complex challenge of defining who a journalist is in the face of rapidly changing technological norms, and advances scholarship in the field of media ethics that positions ethical analysis at the institutional level.
104.
105.
Background: Esports players, like traditional athletes, practice for long hours and are thus vulnerable to the negative health effects of prolonged sitting. There is a lack of research on the physical activity of competitive players and the health ramifications of their prolonged sitting. The purpose of this study was to investigate activity levels, body mass index (BMI), and body composition in collegiate esports players as compared to age-matched controls.
Methods: Twenty-four male collegiate esports players and non-esports players between 18 and 25 years of age provided written consent to participate. Physical activity was examined using daily activity (step count) measured with a wrist-worn activity tracker. A questionnaire assessing physical activity was also administered. Secondary outcomes included body-fat percentage, lean body mass, BMI, and bone mineral content measured using dual X-ray absorptiometry.
Results: The step count of the esports players was significantly lower than that of the age-matched controls (6040.2 ± 3028.6 vs. 12843.8 ± 5661.1; p = 0.004). Esports players exhibited a greater body-fat percentage (p = 0.05), less lean body mass (p = 0.003), and less bone mineral content (p = 0.03), despite no difference in BMI between the two groups.
Conclusion: Compared to non-esports players, collegiate esports players were significantly less active and had a higher body-fat percentage, with lower lean body mass and bone mineral content. BMI showed no difference between the two groups. Esports athletes displayed significantly less activity and poorer body composition, both of which are correlated with potential health issues and risk of injury. BMI did not capture this difference and should not be considered an accurate measure of health in competitive esports players.
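The reported step-count difference can be checked against the published summary statistics with a Welch t-test, as in the sketch below; the group sizes of 12 per group are an assumption, since the abstract reports only the overall sample of 24.

```python
# Recomputing the step-count comparison from the reported summary statistics.
# Group sizes (12 per group) are an assumption; the abstract gives only the
# total of 24 participants.
from scipy import stats

result = stats.ttest_ind_from_stats(
    mean1=6040.2,  std1=3028.6, nobs1=12,  # esports players
    mean2=12843.8, std2=5661.1, nobs2=12,  # age-matched controls
    equal_var=False,  # Welch's t-test, since the group variances differ widely
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

Under these assumed group sizes the test comes out clearly significant, consistent in direction with the p = 0.004 reported in the abstract.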
106.
107.
The outbreak of the COVID-19 pandemic was driven in part by the challenge of identifying asymptomatic and presymptomatic carriers of the virus, which highlights a strong motivation for diagnostics with high sensitivity that can be rapidly deployed. Moreover, several SARS-CoV-2 variants of concern, including Omicron, need to be identified as soon as a sample tests ‘positive’; unfortunately, a traditional PCR test does not allow their specific identification. Herein, for the first time, we have developed MOPCS (Methodologies of Photonic CRISPR Sensing), which combines an optical sensing technology, surface plasmon resonance (SPR), with the ‘gene scissors’ of the clustered regularly interspaced short palindromic repeats (CRISPR) technique to achieve both high sensitivity and specificity in the measurement of viral variants. MOPCS is a low-cost, CRISPR/Cas12a-system-empowered SPR gene-detecting platform that can analyze viral RNA, without the need for amplification, within 38 min from sample input to results output, and achieve a limit of detection of 15 fM. MOPCS achieves a highly sensitive analysis of SARS-CoV-2 and of the mutations that appear in variants B.1.617.2 (Delta), B.1.1.529 (Omicron), and BA.1 (a subtype of Omicron). The platform was also used to analyze recently collected patient samples from a local outbreak in China, identified by the Centers for Disease Control and Prevention. This innovative CRISPR-empowered SPR platform will further contribute to the fast, sensitive, and accurate detection of target nucleic acid sequences with single-base mutations.
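As a back-of-the-envelope illustration of that sensitivity, the reported 15 fM limit of detection can be converted into RNA copies per microliter by a plain unit conversion; the calculation below is ours, not a figure from the paper.

```python
# Converting the reported limit of detection (15 fM) into RNA copies per
# microliter -- a plain unit conversion, not a figure from the paper.
AVOGADRO = 6.022e23  # molecules per mole
lod_molar = 15e-15   # 15 fM = 15e-15 mol/L

copies_per_liter = lod_molar * AVOGADRO
copies_per_microliter = copies_per_liter / 1e6  # 1 L = 1e6 uL

print(f"{copies_per_microliter:.0f} copies/uL")  # ~9,033 copies/uL
```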
108.
109.
110.