996.
Background: Demonstration is a widely used method in sports teaching and coaching, based on the assumption that it is more beneficial than verbal instructions or trial-and-error methods for skill acquisition. Although in teaching/coaching situations the demonstration is usually carried out live in front of the learners, in research contexts it is most often presented via video. However, a direct comparison between these two types of model has rarely been undertaken in a motor context.

Purpose: In this study, we aimed to compare the effectiveness of observing a live model versus a video model for the early acquisition of a complex judo movement.

Research Design: Participants observed either a live model or a video model executing the task. After observation, they practised for three minutes, taking five trials, and then performed the task for analysis. This procedure was repeated three times. The form and technique of each participant's execution were evaluated using a technical score.

Main results: The results indicated a significant improvement in task execution by the end of the practice session. However, this improvement occurred only for the video-model group, between the second and third blocks of practice.

Conclusions: The video demonstration seems more effective than the live one for the early acquisition of a completely new complex coordination. This may be due to the simplification of the visual information in the video condition because of its two-dimensionality. This simplification may allow observers to identify more easily the key elements that guide their subsequent performance of the task.
997.
Abstract

The aim of this study was to develop a performance test set-up for America's Cup grinders. The test set-up had to mimic on-boat grinding activity and be capable of collecting data for analysis and evaluation of grinding performance. The study included a literature-based analysis of grinding demands, and a test protocol was developed to accommodate the necessary physiological loads. The resulting protocol consists of 10 intervals of 20 revolutions each, interspersed with active resting periods of 50 s. The 20 revolutions combine forward and backward grinding with an exponentially rising resistance. A custom-made grinding ergometer with computer-controlled resistance was developed, capable of collecting data during the test. The collected data can be used to derive measures of grinding performance such as peak power, time to complete, and the decline in repeated grinding performance.
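The performance measures named above (peak power, time to complete, and the decline across repeated intervals) could be derived from logged ergometer data roughly as follows. This is a minimal sketch under assumed conventions: the per-interval data layout, function name, and the choice of first-to-last mean-power drop as the "decline" measure are illustrative assumptions, not the study's actual implementation.

```python
# Sketch: deriving grinding-performance measures from per-interval
# ergometer logs. The data layout (a list of intervals, each with
# power samples in watts and a completion time in seconds) is an
# assumption for illustration.

def interval_measures(intervals):
    """intervals: list of dicts with 'power_w' (list of watt samples
    for the 20 revolutions) and 'duration_s' (time to complete)."""
    # Peak power: highest single sample across all intervals.
    peak_power = max(max(iv["power_w"]) for iv in intervals)
    # Mean power per interval, used to track repeated performance.
    mean_powers = [sum(iv["power_w"]) / len(iv["power_w"]) for iv in intervals]
    # Time to complete each 20-revolution bout.
    times = [iv["duration_s"] for iv in intervals]
    # Decline: percent drop from the first interval's mean power to
    # the last interval's mean power (one possible decline index).
    decline_pct = 100.0 * (mean_powers[0] - mean_powers[-1]) / mean_powers[0]
    return {"peak_power_w": peak_power,
            "mean_power_w": mean_powers,
            "times_s": times,
            "decline_pct": decline_pct}
```

A fatigue index could equally be fit as a regression slope over the 10 intervals; the first-to-last drop is simply the most transparent choice for a sketch.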
998.
Energy-service companies, part of the producer-services sector, are highly technology-intensive and knowledge-intensive, and are expected to lead energy conservation and environmental protection, a strategic emerging industry in China; their rapid growth is important for promoting energy saving and emission reduction. Combining theoretical modeling with mathematical verification, this study empirically examines the influence of driving factors on the growth performance of energy-service companies. The results show that supportive policy has the most significant effect on their growth, followed by integration capability, technical talent, and funding sources. Corresponding countermeasures and suggestions are proposed based on these findings.
999.
Student difficulties in science learning are frequently attributed to misconceptions about scientific concepts. We argue that domain‐general perceptual processes may also influence students' ability to learn and demonstrate mastery of difficult science concepts. Using the concept of center of gravity (CoG), we show how student difficulty in applying CoG to an object such as a baseball bat can be accounted for, at least in part, by general principles of perception (i.e., not exclusively physics‐based) that make perceiving the CoG of some objects more difficult than others. In particular, it is perceptually difficult to locate the CoG of objects with asymmetric‐extended properties. The basic perceptual features of objects must be taken into account when assessing students' classroom performance and developing effective science, technology, engineering, and mathematics (STEM) teaching methods.
1000.
This article examines the validity of the Undergraduate Research Student Self-Assessment (URSSA), a survey used to evaluate undergraduate research (UR) programs. The underlying structure of the survey was assessed with confirmatory factor analysis; also examined were correlations between different average scores, score reliability, and matches between numerical and textual item responses. The study found that four components of the survey represent separate but related constructs for cognitive skills and affective learning gains derived from the UR experience. Average scores from item blocks formed reliable but moderately to highly correlated composite measures. Additionally, some questions about student learning gains (meant to assess individual learning) correlated with ratings of satisfaction with external aspects of the research experience. The pattern of correlation among individual items suggests that items asking students to rate external aspects of their environment were more like satisfaction ratings than items that directly ask about student skills attainment. Finally, survey items asking about student aspirations to attend graduate school in science reflected inflated estimates of the proportions of students who had actually decided on graduate education after their UR experiences. Recommendations for revisions to the survey include clarified item wording and increased discrimination between item blocks through reorganization.

Undergraduate research (UR) experiences have long been an important component of science education at universities and colleges but have received greater attention in recent years, as they have been identified as important ways to strengthen preparation for advanced study and work in the science fields, especially among students from underrepresented minority groups (Tsui, 2007; Kuh, 2008).
UR internships provide students with the opportunity to conduct authentic research in laboratories with scientist mentors, as students help design projects, gather and analyze data, and write up and present findings (Laursen et al., 2010). The promised benefits of UR experiences include both increased skills and greater familiarity with how science is practiced (Russell et al., 2007). While students learn the basics of scientific methods and laboratory skills, they are also exposed to the culture and norms of science (Carlone and Johnson, 2007; Hunter et al., 2007; Lopatto, 2010). Students learn about the day-to-day world of practicing science and are introduced to how scientists design studies, collect and analyze data, and communicate their research. After participating in UR, students may make more informed decisions about their future, and some may be more likely to decide to pursue graduate education in science, technology, engineering, and mathematics (STEM) disciplines (Bauer and Bennett, 2003; Russell et al., 2007; Eagan et al., 2013).

While UR experiences potentially have many benefits for undergraduate students, assessing these benefits is challenging (Laursen, 2015). Large-scale research-based evaluation of the effects of UR is limited by a range of methodological problems (Eagan et al., 2013). True experimental studies are almost impossible to implement, since random assignment of students into UR programs is both logistically and ethically impractical, while many simple comparisons between UR and non-UR groups of students suffer from noncomparable groups and limited generalizability (Maton and Hrabowski, 2004). Survey studies often rely on poorly developed measures and use nonrepresentative samples, and large-scale survey research usually requires complex statistical models to control for student self-selection into UR programs (Eagan et al., 2013).
For smaller-scale program evaluation, evaluators also encounter a number of measurement problems. Because of the wide range of disciplines, research topics, and methods, common standardized tests assessing laboratory skills and understandings across these disciplines are difficult to find. While faculty at individual sites may directly assess products, presentations, and behavior using authentic assessments such as portfolios, rubrics, and performance assessments, these assessments can be time-consuming and not easily comparable with similar efforts at other laboratories (Stokking et al., 2004; Kuh et al., 2014). Additionally, the affective outcomes of UR are not readily tapped by direct academic assessment, as many of the benefits found for students in UR, such as motivation, enculturation, and self-efficacy, are not measured by tests or other assessments (Carlone and Johnson, 2007). Other instruments for assessing UR outcomes, such as Lopatto’s SURE (Lopatto, 2010), focus on these affective outcomes rather than direct assessments of skills and cognitive gains.

The size of most UR programs also makes assessment difficult. Research Experiences for Undergraduates (REUs), one mechanism by which UR programs may be organized within an institution, are funded by the National Science Foundation (NSF), but unlike many other educational programs at NSF (e.g., TUES) that require fully funded evaluations with multiple sources of evidence (Frechtling, 2010), REUs are generally so small that they cannot typically support this type of evaluation unless multiple programs pool their resources to provide adequate assessment. Informal UR experiences, offered to students by individual faculty within their own laboratories, are often more common but are typically not coordinated across departments or institutions or accountable to a central office or agency for assessment.
Partly toward this end, the Undergraduate Research Student Self-Assessment (URSSA) was developed as a common assessment instrument that can be compared across multiple UR sites within or across institutions. It is meant to be used as one source of assessment information about UR sites and their students.

The current research examines the validity of the URSSA in the context of its use as a self-report survey for UR programs and laboratories. Because the survey has been taken by more than 3400 students, we can test some aspects of how the survey is structured and how it functions. Assessing the validity of the URSSA for its intended use is a process of testing hypotheses about how well the survey represents its intended content. This ongoing process (Messick, 1993; Kane, 2001) involves gathering evidence from a range of sources to learn whether validity claims are supported by evidence and whether the survey results can be used confidently in specific contexts. For the URSSA, our method of inquiry focuses on how the survey is used to assess consortia of REU sites. In this context, survey results are used for quality assurance and comparisons of average ratings over years and as general indicators of program success in encouraging students to pursue graduate science education and scientific careers. Our research questions focus on the meaning and reliability of “core indicators” used to track self-reported learning gains in four areas and the ability of numerical items to capture student aspirations for future plans to attend graduate school in the sciences.
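The reliability of composite measures built by averaging an item block is commonly checked with an internal-consistency statistic such as Cronbach's alpha. The sketch below is a generic illustration of that computation, not the URSSA analysis itself; the function name and the toy data are invented for the example, and the paper does not state which reliability coefficient was used.

```python
# Sketch: Cronbach's alpha for a block of survey items, as one way
# to judge whether averaged item scores form a reliable composite.
# Generic illustration only -- not the URSSA study's actual code.

def cronbach_alpha(items):
    """items: list of per-item score lists, all of equal length
    (one score per respondent). Returns the alpha coefficient."""
    k = len(items)            # number of items in the block

    def variance(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of the individual item variances.
    item_var_sum = sum(variance(it) for it in items)
    # Variance of each respondent's total score across the block.
    n = len(items[0])
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    total_var = variance(totals)
    # Standard alpha formula: k/(k-1) * (1 - sum(item vars)/total var).
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

Values near 1 indicate a highly internally consistent block; perfectly parallel items yield alpha = 1 exactly, while weakly related items push alpha toward 0.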