261.
Formative assessment is considered helpful in supporting students' learning and in designing teaching. Following Aufschnaiter and Alonzo's framework, teachers' formative assessment can be subdivided into three practices: eliciting evidence, interpreting evidence and responding. Since students' conceptions are judged to be important for meaningful learning across disciplines, teachers are required to assess their students' conceptions. This article focuses on how learning analytics can support the assessment of students' conceptions in class. Existing and potential contributions of learning analytics are discussed in relation to the named formative assessment framework, with the aim of enhancing teachers' options for considering individual students' conceptions. We refer to findings from biology and computer science education on existing assessment tools and identify limitations and potentials with respect to the assessment of students' conceptions.

Practitioner notes

What is already known about this topic
  • Students' conceptions are considered to be important for learning processes, but interpreting evidence for learning with respect to students' conceptions is challenging for teachers.
  • Assessment tools have been developed in different educational domains for teaching practice.
  • Techniques from artificial intelligence and machine learning have been applied for automated assessment of specific aspects of learning.
What this paper adds
  • Findings on existing assessment tools from two educational domains are summarised and limitations with respect to assessment of students' conceptions are identified.
  • Relevant data that need to be analysed for insights into students' conceptions are identified from an educational perspective.
  • Potential contributions of learning analytics to support the challenging task of eliciting students' conceptions are discussed.
Implications for practice and/or policy
  • Learning analytics can enhance the eliciting of students' conceptions.
  • Based on the analysis of existing works, further exploration and developments of analysis techniques for unstructured text and multimodal data are desirable to support the eliciting of students' conceptions.
262.
Learning analytics is a fast-growing discipline. Institutions and countries alike are racing to harness the power of using data to support students, teachers and stakeholders. Research in the field has shown that predicting and supporting underachieving students is worthwhile. Nonetheless, challenges remain unresolved, for example, lack of generalizability, portability and failure to advance our understanding of students' behaviour. Recently, interest has grown in modelling individual or within-person behaviour, that is, understanding person-specific changes. This study applies a novel method that combines within-person with between-person variance to better understand how changes unfolding at the individual level can explain students' final grades. By modelling the within-person variance, we directly model where the process takes place: the student. Our study finds that combining within- and between-person variance offers better explanatory power and better guidance on which variables could be targeted for intervention at the personal and group levels. Furthermore, using within-person variance opens the door for person-specific idiographic models that work on individual student data and offer students support based on their own insights.
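The hybrid decomposition described above can be sketched on simulated data: each repeated-measures predictor is split into a between-person part (the student's mean) and a within-person part (the deviation from that mean), and both enter the model as separate slopes. All data and coefficients below are hypothetical, and plain OLS stands in for the mixed-effects estimator such studies typically use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated repeated measures: 50 students observed over 8 course weeks.
# 'effort' varies both between students and within a student over weeks.
n_students, n_weeks = 50, 8
student = np.repeat(np.arange(n_students), n_weeks)
effort = rng.normal(5, 1, n_students)[student] + rng.normal(0, 1, n_students * n_weeks)

# Hybrid decomposition: between-person part = the student's mean,
# within-person part = deviation from that mean.
between = np.array([effort[student == s].mean() for s in range(n_students)])[student]
within = effort - between

# Outcome depends differently on the two components (hypothetical coefficients).
grade = 1.0 * between + 0.4 * within + rng.normal(0, 0.5, effort.size)

# OLS with separate between- and within-person slopes.
X = np.column_stack([np.ones_like(effort), between, within])
beta, *_ = np.linalg.lstsq(X, grade, rcond=None)
print({"between_slope": round(beta[1], 2), "within_slope": round(beta[2], 2)})
```

Because the two slopes are estimated separately, the model can show (as here, by construction) that within-person change relates to the outcome differently than between-person differences do.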

Practitioner notes

What is already known about this topic
  • Predicting students' performance has commonly been implemented using cross-sectional data at the group level.
  • Predictive models help predict and explain student performance in individual courses but are hard to generalize.
  • Heterogeneity has been a major factor in hindering cross-course or context generalization.
What this paper adds
  • Intra-individual (within-person) variations can be modelled using repeated measures data.
  • Hybrid between–within-person models offer more explanatory and predictive power of students' performance.
  • Intra-individual variations do not mirror interindividual variations, and thus, generalization is not warranted.
  • Regularity is a robust predictor of student performance at both the individual and the group levels.
Implications for practice
  • The study offers a method for teachers to better understand and predict students' performance.
  • The study offers a method of identifying what works on a group or personal level.
  • Intervention at the personal level can be more effective when using within-person predictors and at the group level when using between-person predictors.
263.
This article reports on a trace-based assessment of approaches to learning used by middle-school-aged children who interacted with NASA Mars Mission science, technology, engineering and mathematics (STEM) games in Whyville, an online game environment with 8 million registered young learners. The learning objectives of the two games included developing awareness and knowledge of NASA missions, building knowledge and skills in measurement and scaling, and applying measurement to planetary comparisons in the solar system. Trace data from 1361 interactions were analysed with nonparametric multidimensional scaling methods, which permitted visual examination and statistical validation and provided an example and proof of concept for the multidimensional scaling approach to analysing time-based behavioural data from a game or simulation. Differences in approach to learning were found, illustrating the potential value of the methodology to curriculum and game-based learning designers as well as other creators of online STEM content for pre-college youth. The theoretical framework of the method and analysis makes use of the Epistemic Network Analysis toolkit as a post hoc data exploration platform, and the discussion centres on issues of semantic interpretation of interaction end-states and the application of evidence-centred design in post hoc analysis.
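As a rough illustration of scaling behavioural traces into a low-dimensional space, the sketch below runs classical (Torgerson) metric MDS on hypothetical action-count data. This is a simpler stand-in for the nonparametric MDS used in the study; all counts and action categories are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trace data: rows = play sessions, columns = counts of
# in-game actions (e.g. measure, scale, compare); purely illustrative.
traces = rng.poisson(lam=[4, 2, 1], size=(20, 3)).astype(float)

# Pairwise Euclidean dissimilarities between sessions.
diff = traces[:, None, :] - traces[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=2))

# Classical (Torgerson) MDS: double-centre the squared distances and
# take the top eigenvectors as 2-D coordinates.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

print(coords.shape)  # one 2-D point per session, ready for visual examination
```

The resulting 2-D configuration is what allows visual inspection of clusters of sessions with similar behaviour; the nonparametric variant preserves only the rank order of dissimilarities rather than their magnitudes.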

Practitioner notes

What is already known about this topic
  • Educational game play has been demonstrated to positively affect learning performance and learning persistence.
  • Trace-based assessment from digital learning environments can focus on learning outcomes and processes drawn from user behaviour and contextual data.
  • Existing approaches used in learning analytics do not (fully) meet criteria commonly used in psychometrics or for different forms of validity in assessment, even though some consider learning analytics a form of assessment in the broadest sense.
  • Frameworks of knowledge representation in trace-based research often include concepts from cognitive psychology, education and cognitive science.
What this paper adds
  • To assess skills-in-action, stronger connections between learning analytics and educational measurement can integrate parametric and nonparametric statistics with theory-driven modelling and semantic network analysis, widening the basis for inferences, validity, meaning and understanding drawn from digital traces.
  • An expanded methodological foundation is offered for analysis in which nonparametric multidimensional scaling, multimodal analysis, epistemic network analysis and evidence-centred design are combined.
Implications for practice and policy
  • The new foundations are suggested as a principled, theory-driven, embedded data collection and analysis framework: one that provides structure for reverse engineering of semantics, as well as pre-planning frameworks that support creative freedom in the creation of digital learning environments.
265.
Game-based assessment (GBA), a specific application of games for learning, has been recognized as an alternative form of assessment. While there is a substantive body of literature that supports the educational benefits of GBA, limited work investigates the validity and generalizability of such systems. In this paper, we describe applications of learning analytics methods to provide evidence for psychometric qualities of a digital GBA called Shadowspect, particularly to what extent Shadowspect is a robust assessment tool for middle school students' spatial reasoning skills. Our findings indicate that Shadowspect is a valid assessment for spatial reasoning skills, and it has comparable precision for both male and female students. In addition, students' enjoyment of the game is positively related to their overall competency as measured by the game regardless of the level of their existing spatial reasoning skills.

Practitioner notes

What is already known about this topic
  • Digital games can be a powerful context to support and assess student learning.
  • Games as assessments need to meet certain psychometric qualities such as validity and generalizability.
  • Learning analytics provide useful ways to establish assessment models for educational games, as well as to investigate their psychometric qualities.
What this paper adds
  • How a digital game can be coupled with learning analytics practices to assess spatial reasoning skills.
  • How to evaluate psychometric qualities of game-based assessment using learning analytics techniques.
  • Investigation of validity and generalizability of game-based assessment for spatial reasoning skills and the interplay of the game-based assessment with enjoyment.
Implications for practice and/or policy
  • Game-based assessments that incorporate learning analytics can be used as an alternative to pencil-and-paper tests to measure cognitive skills such as spatial reasoning.
  • More training and assessment of spatial reasoning embedded in games can motivate students who might not be on the STEM tracks, thus broadening participation in STEM.
  • Game-based learning and assessment researchers should consider possible factors that affect how certain populations of students enjoy educational games, so it does not further marginalize specific student populations.
267.
An extraordinary amount of data is becoming available in educational settings, collected from a wide range of Educational Technology tools and services. This creates opportunities for using methods from Artificial Intelligence and Learning Analytics (LA) to improve learning and the environments in which it occurs. And yet, analytics results produced using these methods often fail to link to theoretical concepts from the learning sciences, making them difficult for educators to trust, interpret and act upon. At the same time, many of our educational theories are difficult to formalise into testable models that link to educational data. New methodologies are required to formalise the bridge between big data and educational theory. This paper demonstrates how causal modelling can help to close this gap. It introduces the apparatus of causal modelling, and shows how it can be applied to well-known problems in LA to yield new insights. We conclude with a consideration of what causal modelling adds to the theory-versus-data debate in education, and extend an invitation to other investigators to join this exciting programme of research.
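A minimal example of the kind of causal reasoning the paper advocates: if prior ability confounds the relationship between tool use and grades, a naive regression is biased, while adjusting for the confounder (a backdoor adjustment) recovers the causal effect. The DAG and all coefficients below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Hypothetical DAG: ability -> tool_use, ability -> grade, tool_use -> grade.
ability = rng.normal(size=n)
tool_use = 0.8 * ability + rng.normal(size=n)
grade = 0.3 * tool_use + 1.0 * ability + rng.normal(size=n)

# Naive regression of grade on tool_use is confounded by ability.
naive = np.polyfit(tool_use, grade, 1)[0]

# Backdoor adjustment: include the confounder in the regression.
X = np.column_stack([np.ones(n), tool_use, ability])
adjusted = np.linalg.lstsq(X, grade, rcond=None)[0][1]

print(round(naive, 2), round(adjusted, 2))  # naive is biased upward; adjusted ≈ 0.3
```

This is exactly the replication risk mentioned above: the naive estimate would transfer poorly to a context where the ability–tool-use link differs, while the adjusted estimate targets the stable causal quantity.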

Practitioner notes

What is already known about this topic

  • ‘Correlation does not equal causation’ is a familiar claim in many fields of research but increasingly we see the need for a causal understanding of our educational systems.
  • Big data bring many opportunities for analysis in education, but also a risk that results will fail to replicate in new contexts.
  • Causal inference is a well-developed approach for extracting causal relationships from data, but is yet to become widely used in the learning sciences.

What this paper adds

  • An overview of causal modelling to support educational data scientists interested in adopting this promising approach.
  • A demonstration of how constructing causal models forces us to more explicitly specify the claims of educational theories.
  • An understanding of how we can link educational datasets to theoretical constructs represented as causal models, thereby formulating empirical tests of the educational theories that they represent.

Implications for practice and/or policy

  • Causal models can help us to explicitly specify educational theories in a testable format.
  • It is sometimes possible to make causal inferences from educational data if we understand our system well enough to construct a sufficiently explicit theoretical model.
  • Learning Analysts should work to specify more causal models and test their predictions, as this would advance our theoretical understanding of many educational systems.
268.
Traditional item analyses such as classical test theory (CTT) use exam-taker responses to assessment items to approximate their difficulty and discrimination. The increased adoption by educational institutions of electronic assessment platforms (EAPs) provides new avenues for assessment analytics by capturing detailed logs of an exam-taker's journey through their exam. This paper explores how logs created by EAPs can be employed alongside exam-taker responses and CTT to gain deeper insights into exam items. In particular, we propose an approach for deriving features from exam logs for approximating item difficulty and discrimination based on exam-taker behaviour during an exam. Items for which difficulty and discrimination differ significantly between CTT analysis and our approach are flagged through outlier detection for independent academic review. We demonstrate our approach by analysing de-identified exam logs and responses to assessment items of 463 medical students enrolled in a first-year biomedical sciences course. The analysis shows that the number of times an exam-taker visits an item before selecting a final response is a strong indicator of an item's difficulty and discrimination. Scrutiny by the course instructor of the seven items identified as outliers suggests our log-based analysis can provide insights beyond what is captured by traditional item analyses.
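The CTT indices discussed above are straightforward to compute from a response matrix, and a log-derived feature such as mean visits per item can sit alongside them. The sketch below uses simulated data (the 463-student cohort size comes from the study; everything else, including the randomly generated visit counts, is illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)
n_students, n_items = 463, 10  # cohort size from the study; item count hypothetical

# Simulated binary response matrix (1 = correct) under a logistic model.
ability = rng.normal(size=n_students)
item_loc = np.linspace(-1.5, 1.5, n_items)  # easy -> hard
p_correct = 1 / (1 + np.exp(-(ability[:, None] - item_loc[None, :])))
responses = (rng.random((n_students, n_items)) < p_correct).astype(int)

# Classical test theory indices.
difficulty = responses.mean(axis=0)        # proportion correct per item
total = responses.sum(axis=1)
discrimination = np.array([                # corrected item-total point-biserial
    np.corrcoef(responses[:, i], total - responses[:, i])[0, 1]
    for i in range(n_items)
])

# Log-based proxy explored in the paper: mean visits per item before the
# final response (randomly generated here, purely for illustration).
visits = rng.poisson(2, size=(n_students, n_items)) + 1
mean_visits = visits.mean(axis=0)

print(difficulty.round(2), discrimination.round(2), mean_visits.round(2))
```

Items whose log-based estimates diverge sharply from the CTT indices would then be flagged (eg, via outlier detection) for independent academic review, as the paper proposes.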

Practitioner notes

What is already known about this topic
  • Traditional item analysis is based on exam-taker responses to the items using mathematical and statistical models from classical test theory (CTT). The difficulty and discrimination indices thus calculated can be used to determine the effectiveness of each item and consequently the reliability of the entire exam.
What this paper adds
  • Data extracted from exam logs can be used to identify exam-taker behaviours which complement classical test theory in approximating the difficulty and discrimination of an item and identifying items that may require instructor review.
Implications for practice and/or policy
  • Identifying the behaviours of successful exam-takers may allow us to develop effective exam-taking strategies and personal recommendations for students.
  • Analysing exam logs may also provide an additional tool for identifying struggling students and items in need of revision.
269.
This study analyses the potential of a learning analytics (LA) based formative assessment to construct personalised teaching sequences in mathematics for 5th-grade primary school students. A total of 127 students from Spanish public schools participated in the study. The quasi-experimental study was conducted over six sessions, in which both control and experimental groups participated in a teaching sequence based on mathematical problems. In each session, both groups used audience response systems to record their responses to mathematical tasks about fractions. After each session, students in the control group were given generic homework on fractions (the same activities for all participants), while students in the experimental group were given a personalised set of activities. The personalised homework was based on the students' errors detected through the LA-based formative assessment. After the intervention, the results indicate a higher level of understanding of the concept of fractions in the experimental group than in the control group. Regarding motivational dimensions, the results indicate that instruction using audience response systems has a positive effect compared with regular mathematics classes.
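The assignment logic described above can be sketched in a few lines: errors detected by the audience response system map to topics, and each topic indexes into a bank of remedial activities. The error codes and activity names here are hypothetical, not the study's actual materials.

```python
# Hypothetical bank of remedial fraction activities, keyed by topic.
ACTIVITY_BANK = {
    "equivalence": ["E1", "E2"],
    "ordering": ["O1", "O2"],
    "operations": ["P1", "P2"],
}

def personalised_homework(error_topics, per_topic=1):
    """Select activities only for the topics a student answered incorrectly."""
    tasks = []
    for topic in error_topics:  # topics in the order the errors were detected
        tasks.extend(ACTIVITY_BANK.get(topic, [])[:per_topic])
    return tasks

# A student who erred on equivalence and ordering gets targeted practice;
# a student with no errors gets no extra homework.
print(personalised_homework(["equivalence", "ordering"]))  # ['E1', 'O1']
print(personalised_homework([]))                            # []
```

The control condition in the study corresponds to ignoring the detected errors and handing every student the same fixed activity list.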

Practitioner notes

What is already known about this topic
  • Developing an understanding of fractions is one of the most challenging concepts in elementary mathematics and a solid predictor of future achievements in mathematics.
  • Learning analytics (LA) has the potential to provide quality, functional data for assessing and supporting learners' difficulties.
  • Audience response systems (ARS) are one of the most practical ways to collect data for LA in classroom environments.
  • There is a scarcity of field research implementations on LA mediated by ARS in real contexts of elementary school classrooms.
What this paper adds
  • Empirical evidence about how LA-based formative assessments can enable personalised homework to support student understanding of fractions.
  • Personalised homework based on an LA-based formative assessment improves the students' comprehension of fractions.
  • Using ARS for the teaching of fractions has a positive effect in terms of student motivation.
Implications for practice and/or policy
  • Teachers should be given LA/ARS tools that allow them to quickly provide students with personalised mathematical instruction.
  • Researchers should continue exploring these potentially beneficial educational implementations in other areas.
270.
This paper presents an approach to measuring business sentiment based on textual data. Business sentiment has traditionally been measured by surveys, which are costly and time-consuming to conduct. To address these issues, we take advantage of daily newspaper articles and adopt a self-attention-based model to define a business sentiment index, named S-APIR, where outlier detection models are investigated to properly handle various genres of news articles. Moreover, we propose a simple approach to temporally analyzing how much any given event contributed to the predicted business sentiment index. To demonstrate the validity of the proposed approach, an extensive analysis is carried out on 12 years' worth of newspaper articles. The analysis shows that the S-APIR index is strongly and positively correlated with an established survey-based index (up to a correlation coefficient of r = 0.937) and that the outlier detection is especially effective for a general newspaper. S-APIR is also compared with a variety of economic indices, revealing that it reflects the trend of the macroeconomy as well as the economic outlook and sentiment of economic agents. Finally, to illustrate how S-APIR could benefit economists and policymakers, several events are analyzed with respect to their impacts on business sentiment over time.
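The aggregation step can be illustrated with a toy simulation: per-article sentiment scores (which in S-APIR come from the self-attention model; here they are randomly generated around a latent business cycle) are averaged per period and correlated with a survey-based index. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
months = 36

# Latent business cycle and a survey-based index that tracks it noisily.
latent = np.cumsum(rng.normal(size=months))
survey_index = latent + rng.normal(0, 0.3, months)

# ~100 article scores per month, each a noisy read of the cycle;
# the monthly text-based index is their average.
article_scores = [latent[m] + rng.normal(0, 1, 100) for m in range(months)]
text_index = np.array([s.mean() for s in article_scores])

r = np.corrcoef(text_index, survey_index)[0, 1]
print(round(r, 3))  # strongly positive by construction
```

Averaging over many articles per period is what drives the high correlation: individual article scores are noisy, but their mean tracks the underlying cycle closely, which is the intuition behind comparing a text-derived index with a survey-based one.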