941.
Starting from the theory and practice of grassroots self-governance, this paper examines who should build a student affairs management center in colleges and universities, how it should be built, and what functions it should perform, with the aim of offering practical guidance for establishing such centers and opening a new avenue for theoretical research on student affairs work.
942.
Chinese universities now generally operate under a three-tier university–college–department management system, in which the office of a second-tier college serves as a key bridge, linking levels within the institution and connecting it with peer institutions. The office director, as the chief administrator of a college (department) office, therefore has a direct bearing on the quality of all of the institution's work. A director must not only complete the tasks assigned by the university on time and to standard, but also prepare thoroughly for work ahead; the ability to carry out the institution's overall work in an orderly way is thus a key measure of a director's competence. Accordingly, this paper analyzes the qualities required of college (department) office directors in terms of political quality, professional ethics, professional competence, and capacity for innovation.
943.
Energy service companies, a producer-services sector marked by strong technical specialization and high knowledge intensity, are expected to lead China's strategic emerging energy-conservation and environmental-protection industry, and their rapid growth matters greatly for promoting energy saving and emissions reduction. Combining theoretical modeling with mathematical verification, this study empirically examines how driving factors affect the growth performance of energy service companies. The results show that supportive policy has the most significant effect on their growth, followed by integration capability, technical talent, and access to funding. Countermeasures and recommendations are proposed on the basis of these findings.
944.
Objective: to characterize cardiovascular endocrine changes in SD rats under excessive exercise. Methods: forty male SD rats were randomized by body weight into a sedentary control group, a moderate-overtraining group, a strong-overtraining group, and an exhaustive-exercise group. Rats in the exercise groups completed eight weeks of daily treadmill training at the corresponding intensity, and radioimmunoassay was used to measure changes in myocardial endothelin (ET), angiotensin II (AGT-II), and the myocardial membrane endothelin receptor (ETR). Results: AGT-II content in the strong-overtraining group was significantly lower than in the control group (P<0.01), while the exhaustive-exercise group did not differ significantly from controls; ET content in the strong-overtraining group was significantly lower than in controls (P<0.01), with no significant difference for the exhaustive-exercise or moderate-overtraining groups; ETR values rose significantly in the strong-overtraining group (P<0.05) and fell significantly in the exhaustive-exercise group (P<0.01); ANP content was significantly higher in the exercise groups than in controls (P<0.05, P<0.01), except in the exhaustive-exercise group, where it was significantly lower than in controls (P<0.05). Conclusion: moderate overtraining can markedly improve endocrine function, whereas excessive exercise impairs the endocrine system.
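The group comparisons above come down to two-sample tests on small samples. As a rough illustration only (the abstract does not state which test was used, and the readings below are invented, not the study's data), a Welch t statistic for unequal variances can be computed like this:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                        # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical AGT-II readings (arbitrary units), not the study's data
quiet = [12.1, 11.8, 12.5, 12.0, 11.9]
strong = [9.2, 9.8, 9.5, 9.1, 9.6]
t, df = welch_t(strong, quiet)
print(round(t, 2), round(df, 1))
```

A large negative t here corresponds to the abstract's finding that the strong-overtraining group's values sit well below the sedentary controls'.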
945.
Taking three policy-making processes in the Internet of Things industry as its cases, this paper uses qualitative methods combining longitudinal study, interviews, and content analysis to examine how government formulates policy for strategic emerging industries. The results show that policy makers divide the policy-making process into multiple sub-decision processes; within each sub-decision, numerous micro-decisions are formed and implemented through interaction, and these interactions serve four functions: information acquisition, action coordination, action commitment, and policy commitment.
946.
Drawing on the Derwent World Patents Index, this study builds a patent value evaluation system from two perspectives, the strength of the corporate assignee and the patented technology itself, with five core indicators: patent application count, citing-patent count, cited-patent count, patent family size, and number of claims. The aim is to reveal how assignees and forward/backward citation relations affect patent value. Patent value is then computed for two Apple Inc. patents, publication numbers US7233318B1 and US7827568B1, to verify the model's soundness and practicality.
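The evaluation system described above combines the five core indicators into a single value score. A minimal sketch of one plausible scoring rule, min-max normalization followed by a weighted sum; the weights, normalization ranges, and indicator values are purely illustrative, since the study does not publish its weighting scheme or the two patents' raw figures:

```python
# Hypothetical weights over the abstract's five core indicators
WEIGHTS = {
    "applications": 0.15,    # patent applications by the assignee
    "forward_cites": 0.35,   # citing-patent count (technology impact)
    "backward_cites": 0.10,  # cited-patent count
    "family_size": 0.25,     # patent family size
    "claims": 0.15,          # number of claims
}

def patent_value(indicators, norms):
    """Weighted sum of min-max normalized indicator values, clipped to [0, 1]."""
    score = 0.0
    for key, w in WEIGHTS.items():
        lo, hi = norms[key]
        x = (indicators[key] - lo) / (hi - lo)  # min-max normalize
        score += w * max(0.0, min(1.0, x))
    return score

# Illustrative numbers only, not the values of US7233318B1 / US7827568B1
norms = {"applications": (0, 5000), "forward_cites": (0, 400),
         "backward_cites": (0, 200), "family_size": (0, 40), "claims": (0, 60)}
a = {"applications": 3200, "forward_cites": 310, "backward_cites": 45,
     "family_size": 22, "claims": 30}
print(round(patent_value(a, norms), 3))
```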
947.
Open-access (OA) platforms for journal articles have become an important channel through which researchers obtain scholarly information, and their systematic analysis and evaluation can usefully inform the development of journal-article OA platforms in China. Through expert interviews and a questionnaire survey, this study constructs an indicator system for user satisfaction with journal OA platforms and, based on the analytic network process (ANP), proposes a method for evaluating that satisfaction. The method is then applied to systematically evaluate the OA platforms of 10 core management science journals. The case results show that the proposed method is feasible and has strong practical value.
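ANP derives criterion weights from the limiting powers of a column-stochastic supermatrix. A minimal sketch of that limiting step via repeated multiplication, using a hypothetical 3-criterion supermatrix rather than the study's actual indicator network:

```python
def limit_priorities(M, iters=200):
    """Iterate v <- M v on a column-stochastic supermatrix M and return
    the limiting priority vector (the key computational step of ANP)."""
    n = len(M)
    v = [1.0 / n] * n  # start from a uniform vector
    for _ in range(iters):
        v = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]  # renormalize against rounding drift
    return v

# Hypothetical supermatrix over three criteria (columns sum to 1):
# content quality, retrieval functionality, service/interface
M = [[0.6, 0.2, 0.1],
     [0.3, 0.5, 0.3],
     [0.1, 0.3, 0.6]]
w = limit_priorities(M)
print([round(x, 3) for x in w])
```

The resulting vector sums to 1 and ranks the criteria; in a full ANP application the supermatrix would be built from pairwise-comparison judgments over the satisfaction indicators.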
948.
Student difficulties in science learning are frequently attributed to misconceptions about scientific concepts. We argue that domain‐general perceptual processes may also influence students' ability to learn and demonstrate mastery of difficult science concepts. Using the concept of center of gravity (CoG), we show how student difficulty in applying CoG to an object such as a baseball bat can be accounted for, at least in part, by general principles of perception (i.e., not exclusively physics‐based) that make perceiving the CoG of some objects more difficult than others. In particular, it is perceptually difficult to locate the CoG of objects with asymmetric‐extended properties. The basic perceptual features of objects must be taken into account when assessing students' classroom performance and developing effective science, technology, engineering, and mathematics (STEM) teaching methods.
949.
This article examines the validity of the Undergraduate Research Student Self-Assessment (URSSA), a survey used to evaluate undergraduate research (UR) programs. The underlying structure of the survey was assessed with confirmatory factor analysis; also examined were correlations between different average scores, score reliability, and matches between numerical and textual item responses. The study found that four components of the survey represent separate but related constructs for cognitive skills and affective learning gains derived from the UR experience. Average scores from item blocks formed reliable but moderately to highly correlated composite measures. Additionally, some questions about student learning gains (meant to assess individual learning) correlated with ratings of satisfaction with external aspects of the research experience. The pattern of correlation among individual items suggests that items asking students to rate external aspects of their environment behaved more like satisfaction ratings than items that directly ask about student skills attainment. Finally, survey items asking about student aspirations to attend graduate school in science reflected inflated estimates of the proportions of students who had actually decided on graduate education after their UR experiences. Recommendations for revisions to the survey include clarified item wording and increased discrimination between item blocks through reorganization.
Undergraduate research (UR) experiences have long been an important component of science education at universities and colleges but have received greater attention in recent years, as they have been identified as important ways to strengthen preparation for advanced study and work in the science fields, especially among students from underrepresented minority groups (Tsui, 2007; Kuh, 2008).
UR internships provide students with the opportunity to conduct authentic research in laboratories with scientist mentors, as students help design projects, gather and analyze data, and write up and present findings (Laursen et al., 2010). The promised benefits of UR experiences include both increased skills and greater familiarity with how science is practiced (Russell et al., 2007). While students learn the basics of scientific methods and laboratory skills, they are also exposed to the culture and norms of science (Carlone and Johnson, 2007; Hunter et al., 2007; Lopatto, 2010). Students learn about the day-to-day world of practicing science and are introduced to how scientists design studies, collect and analyze data, and communicate their research. After participating in UR, students may make more informed decisions about their future, and some may be more likely to decide to pursue graduate education in science, technology, engineering, and mathematics (STEM) disciplines (Bauer and Bennett, 2003; Russell et al., 2007; Eagan et al., 2013).
While UR experiences potentially have many benefits for undergraduate students, assessing these benefits is challenging (Laursen, 2015). Large-scale research-based evaluation of the effects of UR is limited by a range of methodological problems (Eagan et al., 2013). True experimental studies are almost impossible to implement, since random assignment of students into UR programs is both logistically and ethically impractical, while many simple comparisons between UR and non-UR groups of students suffer from noncomparable groups and limited generalizability (Maton and Hrabowski, 2004). Survey studies often rely on poorly developed measures and use nonrepresentative samples, and large-scale survey research usually requires complex statistical models to control for student self-selection into UR programs (Eagan et al., 2013).
For smaller-scale program evaluation, evaluators also encounter a number of measurement problems. Because of the wide range of disciplines, research topics, and methods, common standardized tests assessing laboratory skills and understandings across these disciplines are difficult to find. While faculty at individual sites may directly assess products, presentations, and behavior using authentic assessments such as portfolios, rubrics, and performance assessments, these assessments can be time-consuming and not easily comparable with similar efforts at other laboratories (Stokking et al., 2004; Kuh et al., 2014). Additionally, the affective outcomes of UR are not readily tapped by direct academic assessment, as many of the benefits found for students in UR, such as motivation, enculturation, and self-efficacy, are not measured by tests or other assessments (Carlone and Johnson, 2007). Other instruments for assessing UR outcomes, such as Lopatto's SURE (Lopatto, 2010), focus on these affective outcomes rather than direct assessments of skills and cognitive gains.
The size of most UR programs also makes assessment difficult. Research Experiences for Undergraduates (REUs), one mechanism by which UR programs may be organized within an institution, are funded by the National Science Foundation (NSF), but unlike many other educational programs at NSF (e.g., TUES) that require fully funded evaluations with multiple sources of evidence (Frechtling, 2010), REUs are generally so small that they cannot typically support this type of evaluation unless multiple programs pool their resources to provide adequate assessment. Informal UR experiences, offered to students by individual faculty within their own laboratories, are often more common but are typically not coordinated across departments or institutions or accountable to a central office or agency for assessment.
Partly toward this end, the Undergraduate Research Student Self-Assessment (URSSA) was developed as a common assessment instrument whose results can be compared across multiple UR sites within or across institutions. It is meant to be used as one source of assessment information about UR sites and their students.
The current research examines the validity of the URSSA in the context of its use as a self-report survey for UR programs and laboratories. Because the survey has been taken by more than 3400 students, we can test some aspects of how the survey is structured and how it functions. Assessing the validity of the URSSA for its intended use is a process of testing hypotheses about how well the survey represents its intended content. This ongoing process (Messick, 1993; Kane, 2001) involves gathering evidence from a range of sources to learn whether validity claims are supported by evidence and whether the survey results can be used confidently in specific contexts. For the URSSA, our method of inquiry focuses on how the survey is used to assess consortia of REU sites. In this context, survey results are used for quality assurance, for comparisons of average ratings over years, and as general indicators of program success in encouraging students to pursue graduate science education and scientific careers. Our research questions focus on the meaning and reliability of “core indicators” used to track self-reported learning gains in four areas and the ability of numerical items to capture student aspirations for future plans to attend graduate school in the sciences.
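Composite reliability for item blocks of this kind is conventionally summarized with Cronbach's alpha. A minimal sketch of that computation, using invented 5-point ratings rather than actual URSSA responses:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a block of items.
    items: list of item-score lists, one inner list per item,
    aligned across respondents."""
    k = len(items)                    # number of items in the block
    n = len(items[0])                 # number of respondents

    def var(xs):                      # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(it) for it in items)
    totals = [sum(items[i][r] for i in range(k)) for r in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Hypothetical ratings for a 3-item block (e.g., "thinking like a scientist")
item1 = [4, 5, 3, 4, 2, 5]
item2 = [4, 4, 3, 5, 2, 5]
item3 = [5, 5, 2, 4, 3, 4]
alpha = cronbach_alpha([item1, item2, item3])
print(round(alpha, 2))
```

Values above roughly 0.8 are usually read as adequate reliability for a composite score, though, as the article notes, high alpha does not by itself establish that the items measure distinct constructs.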
950.
The availability of reliable evidence for teaching practices after professional development is limited across science, technology, engineering, and mathematics disciplines, making the identification of professional development “best practices” and effective models for change difficult. We aimed to determine the extent to which postdoctoral fellows (i.e., future biology faculty) believed in and implemented evidence-based pedagogies after completion of a 2-yr professional development program, Faculty Institutes for Reforming Science Teaching (FIRST IV). Postdocs (PDs) attended a 2-yr training program during which they completed self-report assessments of their beliefs about teaching and gains in pedagogical knowledge and experience, and they provided copies of class assessments and video recordings of their teaching. The PDs reported greater use of learner-centered compared with teacher-centered strategies. These data were consistent with the results of expert reviews of teaching videos. The majority of PDs (86%) received video ratings that documented active engagement of students and implementation of learner-centered classrooms. Despite practice of higher-level cognition in class sessions, the items used by the PDs on their assessments of learning focused on lower-level cognitive skills. We attributed the high success of the FIRST IV program to our focus on inexperienced teachers, an iterative process of teaching practice and reflection, and development of and teaching a full course.
Copyright © 北京勤云科技发展有限公司  京ICP备09084417号