Full-text access type
Paid full text | 21,489 articles |
Free | 351 articles |
Free (domestic) | 139 articles |
Subject category
Education | 14,974 articles |
Scientific research | 2,273 articles |
Cultures of various countries | 187 articles |
Sports | 1,459 articles |
General | 274 articles |
Cultural theory | 100 articles |
Information dissemination | 2,712 articles |
Publication year
2022 | 157 articles |
2021 | 337 articles |
2020 | 316 articles |
2019 | 404 articles |
2018 | 557 articles |
2017 | 587 articles |
2016 | 531 articles |
2015 | 482 articles |
2014 | 688 articles |
2013 | 3,616 articles |
2012 | 794 articles |
2011 | 864 articles |
2010 | 740 articles |
2009 | 740 articles |
2008 | 728 articles |
2007 | 773 articles |
2006 | 779 articles |
2005 | 705 articles |
2004 | 407 articles |
2003 | 398 articles |
2002 | 339 articles |
2001 | 471 articles |
2000 | 377 articles |
1999 | 355 articles |
1998 | 205 articles |
1997 | 205 articles |
1996 | 209 articles |
1995 | 160 articles |
1994 | 174 articles |
1993 | 153 articles |
1992 | 231 articles |
1991 | 219 articles |
1990 | 248 articles |
1989 | 224 articles |
1988 | 175 articles |
1987 | 183 articles |
1986 | 187 articles |
1985 | 177 articles |
1984 | 183 articles |
1983 | 170 articles |
1982 | 134 articles |
1981 | 136 articles |
1980 | 132 articles |
1979 | 190 articles |
1978 | 160 articles |
1977 | 113 articles |
1976 | 119 articles |
1975 | 111 articles |
1973 | 104 articles |
1971 | 116 articles |
Sort order: 10,000 query results found, search time 46 ms
941.
Drawing on the theory and practice of grassroots self-governance, this paper examines the actors, methods, and functions involved in building student affairs management centers at higher education institutions, aiming to provide practical guidance for establishing such centers and to offer a new avenue for theoretical research on university student affairs work. Related articles
942.
Chinese universities currently adopt, almost universally, a three-tier university-college-department management system, in which the office of a secondary college serves as a key bridge linking the levels within an institution and connecting it with peer institutions. The office director, as the chief officer of a secondary college (department) office, therefore has a level of overall competence that directly affects the quality of the institution's work. An office director must not only complete the tasks assigned by the institution on time and to standard, but also carry out forward-looking work; whether the institution's overall work can be completed in an orderly manner is thus an important measure of an office director's competence. Accordingly, this paper analyzes the qualities required of college (department) office directors in terms of political quality, professional ethics, professional competence, and capacity for innovation. Related articles
943.
Energy-service companies, part of the producer-services sector and marked by strong technical expertise and high knowledge intensity, are expected to lead China's strategic emerging energy-conservation and environmental-protection industry, and their rapid growth is of great significance for promoting energy saving and emission reduction. Combining theoretical modeling with mathematical verification, this study empirically examines how driver factors affect the growth performance of energy-service companies. The results show that supportive policy has the most significant effect on their growth, followed by integration capability, technical talent, and funding sources. Corresponding countermeasures and suggestions are proposed based on these findings. Related articles
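The abstract above reports an empirical test of which driver factors best predict firm growth performance. A minimal sketch of that kind of analysis, using ordinary least squares on fully synthetic data, might look like the following; the variable names (`policy_support`, `integration_capability`, etc.) and all coefficients are hypothetical stand-ins, not values from the original study.

```python
import numpy as np

# Synthetic illustration only: simulate driver factors for 200 hypothetical
# energy-service firms, with policy support given the strongest true effect
# (mirroring the ordering the abstract reports).
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))
true_coefs = np.array([0.8, 0.5, 0.4, 0.3])  # hypothetical effect sizes
y = X @ true_coefs + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column in the design matrix.
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

labels = ["intercept", "policy_support", "integration_capability",
          "technical_talent", "funding_sources"]
for name, c in zip(labels, coefs):
    print(f"{name}: {c:+.2f}")
```

With enough observations relative to the noise, the estimated coefficients recover the assumed ordering, with the policy-support factor largest.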
944.
Objective: To analyze the cardiovascular endocrine characteristics of SD rats under overtraining. Methods: Forty male SD rats were randomly divided by body weight into a sedentary control group, a moderate-overtraining group, an intense-overtraining group, and an exhaustive-exercise group. Rats in the exercise groups completed 8 weeks of daily treadmill overtraining at different intensities. Radioimmunoassay was used to measure changes in endocrine indices, including endothelin (ET) and angiotensin II (AGT II) in myocardial cells and the endothelin receptor (ETR) on myocardial cell membranes. Results: AGT II content in the intense-overtraining group was significantly lower than in the sedentary group (P<0.01), while AGT II in the exhaustive-exercise group did not differ significantly from the sedentary group. ET content in the intense-overtraining group was significantly lower than in the sedentary group (P<0.01); ET in the exhaustive-exercise and moderate-overtraining groups did not differ significantly from the sedentary group. ETR in the intense-overtraining group increased significantly (P<0.05), whereas ETR in the exhaustive-exercise group decreased significantly (P<0.01). ANP content in the exercise groups was significantly higher than in the sedentary group (P<0.05, P<0.01), while ANP in the exhaustive-exercise group was significantly lower than in the sedentary group (P<0.05). Conclusion: Moderate overtraining can markedly improve endocrine function, whereas excessive overtraining impairs the function of the endocrine system. Related articles
945.
946.
947.
948.
Jason R. Sattizahn Daniel J. Lyons Carly Kontra Susan M. Fischer Sian L. Beilock 《Mind, Brain, and Education》2015,9(3):164-169
Student difficulties in science learning are frequently attributed to misconceptions about scientific concepts. We argue that domain‐general perceptual processes may also influence students' ability to learn and demonstrate mastery of difficult science concepts. Using the concept of center of gravity (CoG), we show how student difficulty in applying CoG to an object such as a baseball bat can be accounted for, at least in part, by general principles of perception (i.e., not exclusively physics‐based) that make perceiving the CoG of some objects more difficult than others. In particular, it is perceptually difficult to locate the CoG of objects with asymmetric‐extended properties. The basic perceptual features of objects must be taken into account when assessing students' classroom performance and developing effective science, technology, engineering, and mathematics (STEM) teaching methods. Related articles
949.
The Undergraduate Research Student Self-Assessment (URSSA): Validation for Use in Program Evaluation
This article examines the validity of the Undergraduate Research Student Self-Assessment (URSSA), a survey used to evaluate undergraduate research (UR) programs. The underlying structure of the survey was assessed with confirmatory factor analysis; also examined were correlations between different average scores, score reliability, and matches between numerical and textual item responses. The study found that four components of the survey represent separate but related constructs for cognitive skills and affective learning gains derived from the UR experience. Average scores from item blocks formed reliable but moderate to highly correlated composite measures. Additionally, some questions about student learning gains (meant to assess individual learning) correlated to ratings of satisfaction with external aspects of the research experience. The pattern of correlation among individual items suggests that items asking students to rate external aspects of their environment were more like satisfaction ratings than items that directly ask about student skills attainment. Finally, survey items asking about student aspirations to attend graduate school in science reflected inflated estimates of the proportions of students who had actually decided on graduate education after their UR experiences. Recommendations for revisions to the survey include clarified item wording and increasing discrimination between item blocks through reorganization.

Undergraduate research (UR) experiences have long been an important component of science education at universities and colleges but have received greater attention in recent years, as they have been identified as important ways to strengthen preparation for advanced study and work in the science fields, especially among students from underrepresented minority groups (Tsui, 2007 ; Kuh, 2008 ).
UR internships provide students with the opportunity to conduct authentic research in laboratories with scientist mentors, as students help design projects, gather and analyze data, and write up and present findings (Laursen et al., 2010 ). The promised benefits of UR experiences include both increased skills and greater familiarity with how science is practiced (Russell et al., 2007 ). While students learn the basics of scientific methods and laboratory skills, they are also exposed to the culture and norms of science (Carlone and Johnson, 2007 ; Hunter et al., 2007 ; Lopatto, 2010 ). Students learn about the day-to-day world of practicing science and are introduced to how scientists design studies, collect and analyze data, and communicate their research. After participating in UR, students may make more informed decisions about their future, and some may be more likely to decide to pursue graduate education in science, technology, engineering, and mathematics (STEM) disciplines (Bauer and Bennett, 2003 ; Russell et al., 2007 ; Eagan et al., 2013 ).

While UR experiences potentially have many benefits for undergraduate students, assessing these benefits is challenging (Laursen, 2015 ). Large-scale research-based evaluation of the effects of UR is limited by a range of methodological problems (Eagan et al., 2013 ). True experimental studies are almost impossible to implement, since random assignment of students into UR programs is both logistically and ethically impractical, while many simple comparisons between UR and non-UR groups of students suffer from noncomparable groups and limited generalizability (Maton and Hrabowski, 2004 ). Survey studies often rely on poorly developed measures and use nonrepresentative samples, and large-scale survey research usually requires complex statistical models to control for student self-selection into UR programs (Eagan et al., 2013 ).
For smaller-scale program evaluation, evaluators also encounter a number of measurement problems. Because of the wide range of disciplines, research topics, and methods, common standardized tests assessing laboratory skills and understandings across these disciplines are difficult to find. While faculty at individual sites may directly assess products, presentations, and behavior using authentic assessments such as portfolios, rubrics, and performance assessments, these assessments can be time-consuming and not easily comparable with similar efforts at other laboratories (Stokking et al., 2004 ; Kuh et al., 2014 ). Additionally, the affective outcomes of UR are not readily tapped by direct academic assessment, as many of the benefits found for students in UR, such as motivation, enculturation, and self-efficacy, are not measured by tests or other assessments (Carlone and Johnson, 2007 ). Other instruments for assessing UR outcomes, such as Lopatto’s SURE (Lopatto, 2010 ), focus on these affective outcomes rather than direct assessments of skills and cognitive gains.

The size of most UR programs also makes assessment difficult. Research Experiences for Undergraduates (REUs), one mechanism by which UR programs may be organized within an institution, are funded by the National Science Foundation (NSF), but unlike many other educational programs at NSF (e.g., TUES) that require fully funded evaluations with multiple sources of evidence (Frechtling, 2010 ), REUs are generally so small that they cannot typically support this type of evaluation unless multiple programs pool their resources to provide adequate assessment. Informal UR experiences, offered to students by individual faculty within their own laboratories, are often more common but are typically not coordinated across departments or institutions or accountable to a central office or agency for assessment.
Partly toward this end, the Undergraduate Research Student Self-Assessment (URSSA) was developed as a common assessment instrument that can be compared across multiple UR sites within or across institutions. It is meant to be used as one source of assessment information about UR sites and their students.

The current research examines the validity of the URSSA in the context of its use as a self-report survey for UR programs and laboratories. Because the survey has been taken by more than 3400 students, we can test some aspects of how the survey is structured and how it functions. Assessing the validity of the URSSA for its intended use is a process of testing hypotheses about how well the survey represents its intended content. This ongoing process (Messick, 1993 ; Kane, 2001 ) involves gathering evidence from a range of sources to learn whether validity claims are supported by evidence and whether the survey results can be used confidently in specific contexts. For the URSSA, our method of inquiry focuses on how the survey is used to assess consortia of REU sites. In this context, survey results are used for quality assurance and comparisons of average ratings over years and as general indicators of program success in encouraging students to pursue graduate science education and scientific careers. Our research questions focus on the meaning and reliability of “core indicators” used to track self-reported learning gains in four areas and the ability of numerical items to capture student aspirations for future plans to attend graduate school in the sciences. Related articles
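The URSSA study above reports that average scores from item blocks formed reliable composite measures. The standard reliability statistic for such an item block is Cronbach's alpha; the sketch below computes it on fully synthetic Likert-style responses (not URSSA data — the sample sizes and scale structure are assumptions for illustration).

```python
import numpy as np

# Illustrative only: simulate one item block driven by a single latent factor,
# the situation in which items form a coherent composite scale.
rng = np.random.default_rng(1)
n_students, n_items = 300, 6
latent = rng.normal(size=(n_students, 1))
items = np.clip(
    np.round(3 + latent + rng.normal(scale=0.8, size=(n_students, n_items))),
    1, 5)  # 1-5 Likert responses

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = (k/(k-1)) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

alpha = cronbach_alpha(items)
print(f"Cronbach's alpha for the item block: {alpha:.2f}")
```

Because every simulated item loads on the same latent factor, alpha comes out high; a block of unrelated items would instead yield a value near zero, which is why the statistic is used to check that an item block can be averaged into one composite.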
950.
Diane Ebert-May Terry L. Derting Timothy P. Henkel Jessica Middlemis Maher Jennifer L. Momsen Bryan Arnold Heather A. Passmore 《CBE life sciences education》2015,14(2)
The availability of reliable evidence for teaching practices after professional development is limited across science, technology, engineering, and mathematics disciplines, making the identification of professional development “best practices” and effective models for change difficult. We aimed to determine the extent to which postdoctoral fellows (i.e., future biology faculty) believed in and implemented evidence-based pedagogies after completion of a 2-yr professional development program, Faculty Institutes for Reforming Science Teaching (FIRST IV). Postdocs (PDs) attended a 2-yr training program during which they completed self-report assessments of their beliefs about teaching and gains in pedagogical knowledge and experience, and they provided copies of class assessments and video recordings of their teaching. The PDs reported greater use of learner-centered compared with teacher-centered strategies. These data were consistent with the results of expert reviews of teaching videos. The majority of PDs (86%) received video ratings that documented active engagement of students and implementation of learner-centered classrooms. Despite practice of higher-level cognition in class sessions, the items used by the PDs on their assessments of learning focused on lower-level cognitive skills. We attributed the high success of the FIRST IV program to our focus on inexperienced teachers, an iterative process of teaching practice and reflection, and development of and teaching a full course. Related articles