Similar Documents
A total of 20 similar documents were retrieved (search time: 125 ms).
1.
With the rapid development of computer-assisted instruction and computer network technology, automatic grading, as a core technology of computer-assisted instruction, has become a hot research topic in recent years. This paper studies the grading of subjective questions in computerized automatic assessment systems. To address the difficulties of automatically grading text-based subjective questions, a concept-map-based grading method for subjective questions is proposed.
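The abstract above does not state how concept maps are compared; as a minimal hedged sketch of the general idea (not the paper's algorithm), the reference answer and a student answer can each be represented as a set of (concept, relation, concept) triples and scored by their overlap. The function name and example triples below are hypothetical.

```python
# Hypothetical sketch: score a student answer by comparing concept-map triples
# (concept, relation, concept) against a reference concept map.
# This is NOT the paper's algorithm, only an illustration of the general idea.

def concept_map_score(reference: set[tuple[str, str, str]],
                      student: set[tuple[str, str, str]]) -> float:
    """Return an F1-style overlap between two sets of concept-map triples."""
    if not reference or not student:
        return 0.0
    matched = reference & student
    precision = len(matched) / len(student)
    recall = len(matched) / len(reference)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

reference_map = {("photosynthesis", "produces", "oxygen"),
                 ("photosynthesis", "requires", "light")}
student_map = {("photosynthesis", "produces", "oxygen"),
               ("photosynthesis", "consumes", "oxygen")}
print(concept_map_score(reference_map, student_map))  # 0.5
```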

2.
The grading of subjective questions in politics has long troubled senior high school politics teachers. Grading subjective questions consumes a great deal of teachers' time and energy; worse still, students show little interest in the teacher's corrections, merely glancing at the score and at the check or cross marks...

3.
The grading of subjective questions has always been a key technology in paperless examination systems. Avoiding the common but inefficient semantic-similarity grading approach, this paper attempts a keyword-coverage-based method to simulate the reasoning process teachers follow when grading subjective questions, introduces the concepts of scoring-point coverage regions and their confidence levels, designs a new grading method for subjective questions, and presents the corresponding algorithms.
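As a rough illustration of the keyword-coverage idea described above (scoring points, coverage, and confidence levels), the sketch below awards each scoring point according to how many of its keywords appear in the answer. It is not the algorithm from the paper; the ScoringPoint structure, threshold, and rubric are invented for illustration.

```python
# Hypothetical sketch of keyword-coverage grading: each scoring point carries a
# set of keywords, a point value, and a confidence weight. This only illustrates
# the general idea described in the abstract, not the paper's exact algorithm.

from dataclasses import dataclass

@dataclass
class ScoringPoint:
    keywords: set[str]   # keywords expected in the coverage region
    points: float        # marks awarded for this scoring point
    confidence: float    # confidence weight in [0, 1]

def grade(answer: str, scoring_points: list[ScoringPoint],
          threshold: float = 0.5) -> float:
    """Award each scoring point whose keyword coverage exceeds the threshold."""
    total = 0.0
    for sp in scoring_points:
        hits = sum(1 for kw in sp.keywords if kw in answer)
        coverage = hits / len(sp.keywords)
        if coverage >= threshold:
            total += sp.points * sp.confidence * coverage
    return total

rubric = [ScoringPoint({"supply", "demand", "price"}, 4.0, 0.9),
          ScoringPoint({"market", "regulation"}, 2.0, 0.8)]
print(grade("When supply exceeds demand the price falls.", rubric))  # 3.6
```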

4.
The grading of subjective questions in politics has long troubled senior high school politics teachers. Grading subjective questions consumes a great deal of teachers' time and energy; worse still, students show little interest in the teacher's corrections, merely glancing at the score and at the check or cross marks, turning a blind eye to the instruction to make corrections, let alone reflecting on the feedback carefully. Faced with the problem that the teacher's work loses its value while students' answering ability stagnates, the author has continually explored new ways of grading subjective questions in teaching practice.

5.
In Web-based instructional systems, grading subjective homework questions electronically feels unnatural; to address this, a method for handwritten grading of assignments within Word documents is proposed and implemented. The method first captures the handwriting trajectory using a VC programming tool, then renders the trajectory in Word. Finally, a score region is created in the Word document, the handwritten score is recognized using character segmentation and recognition techniques, and the result is saved to a database to support subsequent grade management.
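The abstract mentions recognizing the handwritten score via character segmentation and recognition; the paper's implementation lives inside Word via VC/COM and is not reproduced here. As a hedged stand-in, the sketch below shows only a classic vertical-projection segmentation step on a toy binary image; the function name and parameters are made up.

```python
# Hypothetical sketch of one step described above: segmenting a handwritten
# score image into characters by vertical projection before recognition.
# It is not the paper's implementation (which works inside Word via VC/COM).

import numpy as np

def segment_columns(binary_image: np.ndarray, min_width: int = 2) -> list[tuple[int, int]]:
    """Return (start, end) column ranges whose vertical projection is non-zero."""
    projection = binary_image.sum(axis=0)          # ink pixels per column
    segments, start = [], None
    for col, value in enumerate(projection):
        if value > 0 and start is None:
            start = col
        elif value == 0 and start is not None:
            if col - start >= min_width:
                segments.append((start, col))
            start = None
    if start is not None:
        segments.append((start, len(projection)))
    return segments

# Toy 5x10 "image": two blobs of ink separated by blank columns.
img = np.zeros((5, 10), dtype=int)
img[:, 1:3] = 1
img[:, 6:9] = 1
print(segment_columns(img))  # [(1, 3), (6, 9)]
```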

6.
To help teachers grade PowerPoint operation questions quickly and accurately, an automatic batch grading program for PowerPoint operation questions based on AutoIt3 and VBA is proposed. Scoring points and corresponding marks are defined for each PowerPoint operation question, automatic grading code is written in VBA, and automatic batch grading code is written in AutoIt3, so that PowerPoint operation questions are graded automatically in batches. Comparison shows that, relative to manual batch grading by teachers, automatic batch grading by machine greatly improves grading efficiency and quality.
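The program described above is written in VBA and AutoIt3; as a rough Python analogue of the per-file checking idea (not the authors' code), the sketch below uses the third-party python-pptx package to check two invented scoring points. The file name, scoring points, and mark values are assumptions.

```python
# Rough Python analogue of the checking idea (the paper itself uses VBA and
# AutoIt3, not Python). Requires the third-party python-pptx package; the file
# name and scoring points below are made up for illustration.

from pptx import Presentation

def grade_pptx(path: str) -> float:
    prs = Presentation(path)
    score = 0.0

    # Scoring point 1 (2 marks): the presentation has at least 5 slides.
    if len(prs.slides) >= 5:
        score += 2.0

    # Scoring point 2 (3 marks): some slide contains the required title text.
    for slide in prs.slides:
        texts = [s.text_frame.text for s in slide.shapes if s.has_text_frame]
        if any("Course Summary" in t for t in texts):
            score += 3.0
            break
    return score

# Batch grading over many student files would simply loop over a folder, e.g.:
# for f in pathlib.Path("submissions").glob("*.pptx"): print(f.name, grade_pptx(str(f)))
print(grade_pptx("student01.pptx"))
```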

7.
Pro side: Kunyang (Guangzhou Educational Technology Center). I believe that digital support in one-to-one digital learning should provide the following functions: (1) online learning spaces and various SNS tools to support ubiquitous teacher-student communication; (2) teaching feedback and evaluation supported by intelligent computer analysis, such as automatic grading of and feedback on objective questions, and intelligent assessment of subjective tasks such as speaking, writing, and design; (3) computer simulation and virtual reality capabilities that provide immersive online learning environments in place of traditional situational teaching.

8.
叶青  吴夏南  王娟娟 《海外英语》2014,(13):106-108
Through a teaching experiment and a questionnaire survey, this paper applies the online automated essay scoring system Jukuu Pigaiwang (句酷批改网) to college English writing instruction, explores a combined feedback model that integrates automated scoring with teacher feedback, and analyzes the system's influence on the development of college students' English writing ability. The results show that after one semester of instruction, both the experimental and control groups improved noticeably in English writing, but the between-group difference was not significant. The questionnaire results indicate that Pigaiwang offers the advantages of immediate feedback and sentence-by-sentence comments, helping students expand their vocabulary, reduce grammatical errors, and standardize their English writing, but it is relatively weak in evaluating discourse organization and wording, and therefore needs to be combined with teacher guidance and feedback.

9.
Design of automatic grading of subjective questions in an online examination system
This paper studies the grading component of an online examination system and analyzes the reasoning habits of human graders of subjective questions. Based on the closeness-degree and one-way closeness-degree theories from fuzzy mathematics, an automatic scoring algorithm for subjective questions is designed that combines exact keyword matching with keyword closeness-degree matching.
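The abstract combines exact keyword matching with a closeness-degree measure from fuzzy mathematics, but does not give the closeness formula here. In the hedged sketch below, difflib's similarity ratio is substituted for the closeness degree purely for illustration; the rubric, threshold, and function names are invented.

```python
# Hypothetical sketch of combining exact keyword matching with a fuzzy
# "closeness degree". The paper uses closeness measures from fuzzy mathematics;
# here difflib's similarity ratio is substituted purely for illustration.

from difflib import SequenceMatcher

def closeness(a: str, b: str) -> float:
    """Stand-in closeness degree between two strings, in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def keyword_score(answer: str, keywords: dict[str, float],
                  fuzzy_threshold: float = 0.8) -> float:
    """Award full marks for exact keyword hits, partial marks for close matches."""
    tokens = answer.split()
    total = 0.0
    for kw, marks in keywords.items():
        if kw in answer:                      # exact match
            total += marks
            continue
        best = max((closeness(kw, tok) for tok in tokens), default=0.0)
        if best >= fuzzy_threshold:           # near match, weighted by closeness
            total += marks * best
    return total

rubric = {"photosynthesis": 3.0, "chlorophyll": 2.0}
print(keyword_score("Photosynthesus needs chlorophyll in the leaf.", rubric))
```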

10.
Drawing on application automation methods, this paper designs an automatic batch grading framework for Word operation questions based on AutoIt3 and VBA. First, scoring points and corresponding marks are defined for each sub-question according to the Word operation skill assessment requirements; then VBA is used to implement an automatic grading program for a single student's Word operation question; finally, AutoIt3 is used to implement automatic batch grading across many students' submissions.
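As with the PowerPoint entry above, the paper's grader is implemented in VBA and driven in batch by AutoIt3. The sketch below is a rough Python analogue of the per-student checking step using the third-party python-docx package; the file name, scoring points, and mark values are assumptions.

```python
# Rough Python analogue of the per-student checking step (the paper itself
# implements this in VBA, driven in batch by AutoIt3). Uses the third-party
# python-docx package; the file name and scoring points are illustrative only.

from docx import Document
from docx.enum.text import WD_ALIGN_PARAGRAPH

def grade_docx(path: str) -> float:
    doc = Document(path)
    score = 0.0
    if not doc.paragraphs:
        return score
    title = doc.paragraphs[0]

    # Scoring point 1 (2 marks): the title paragraph is centered.
    if title.alignment == WD_ALIGN_PARAGRAPH.CENTER:
        score += 2.0

    # Scoring point 2 (2 marks): every run in the title is explicitly bold.
    # (run.bold is None when the weight is inherited from a style; that case
    # earns no marks in this simplified check.)
    if title.runs and all(run.bold for run in title.runs):
        score += 2.0
    return score

print(grade_docx("student01.docx"))
```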

11.
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students’ construction and revision of scientific arguments. The assessment is built upon automated scoring of students’ arguments and provides feedback to students and teachers. Preliminary validity evidence was collected in this study to support the use of automated scoring in this formative assessment. The results showed satisfactory psychometric properties related to this formative assessment. The automated scores showed satisfactory agreement with human scores, but small discrepancies still existed. Automated scores and feedback encouraged students to revise their answers. Students’ scientific argumentation skills improved during the revision process. These findings provide preliminary evidence to support the use of automated scoring in the formative assessment to diagnose and enhance students’ argumentation skills in the context of climate change in secondary school science classrooms.
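The abstract reports agreement between automated and human scores without naming a statistic; a common way to quantify such agreement is quadratic weighted kappa alongside exact agreement, sketched below with invented scores (this is not data or code from the study).

```python
# Minimal sketch of how agreement between automated and human scores is often
# quantified (quadratic weighted kappa and exact agreement). The scores below
# are made up; this is not data or code from the study.

from sklearn.metrics import cohen_kappa_score

human     = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
automated = [0, 1, 2, 3, 3, 1, 1, 2, 3, 1]

qwk = cohen_kappa_score(human, automated, weights="quadratic")
exact = sum(h == a for h, a in zip(human, automated)) / len(human)
print(f"quadratic weighted kappa = {qwk:.2f}, exact agreement = {exact:.2f}")
```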

12.
This article explores the suitability of static analysis techniques based on the abstract syntax tree (AST) for the automated assessment of early/mid degree level programming. Focus is on fairness, timeliness and consistency of grades and feedback. Following investigation into manual marking practices, including a survey of markers, the assessment of 97 student Java programming submissions is automated using static analysis rules. Initially, no correlation between human provided marks and rule violations is found. This paper investigates why, and considers several improvements to the approaches used for applying static analysis rules. New methods for application are explored and the resulting technique is applied to a second exercise with 95 submissions. The results show a stronger positive correlation with manual assessment, whilst retaining advantages in terms of time cost, pedagogic advantages and instant feedback. This study provides insight into the differences between human assessment and static analysis approaches and highlights several potential pitfalls of simplistic implementations. Finally, this paper concludes that static analysis approaches are appropriate for automated assessment; however, these approaches should be used with care.
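The study applies static-analysis rules to Java ASTs; as a compact, hedged illustration of the general technique (counting rule violations on a syntax tree), the sketch below uses Python's standard ast module instead, with two made-up rules.

```python
# Compact illustration of AST-based static-analysis rules (the study itself
# assesses Java submissions; Python's standard ast module is used here only to
# show the general technique of counting rule violations on a syntax tree).

import ast

SOURCE = """
def mystery(x, y):
    return x + y

def area(radius):
    '''Return the area of a circle.'''
    return 3.14159 * radius ** 2
"""

def count_violations(source: str) -> int:
    tree = ast.parse(source)
    violations = 0
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            if ast.get_docstring(node) is None:   # rule: functions need docstrings
                violations += 1
            if len(node.args.args) > 4:           # rule: too many parameters
                violations += 1
    return violations

print(count_violations(SOURCE))  # 1 (mystery has no docstring)
```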

13.
《教育实用测度》2013,26(3):281-299
The growing use of computers for test delivery, along with increased interest in performance assessments, has motivated test developers to develop automated systems for scoring complex constructed-response assessment formats. In this article, we add to the available information describing the performance of such automated scoring systems by reporting on generalizability analyses of expert ratings and computer-produced scores for a computer-delivered performance assessment of physicians' patient management skills. Two different automated scoring systems were examined. These automated systems produced scores that were approximately as generalizable as those produced by expert raters. Additional analyses also suggested that the traits assessed by the expert raters and the automated scoring systems were highly related (i.e., true correlations between test forms, across scoring methods, were approximately 1.0). In the appendix, we discuss methods for estimating this correlation, using ratings and scores produced by an automated system from a single test form.
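The abstract reports true correlations of roughly 1.0 between scoring methods; one standard way such "true" correlations are estimated is the correction-for-attenuation formula sketched below, although the paper's appendix describes its own estimation methods, which may differ. The observed correlation and reliabilities are invented.

```python
# Minimal sketch of the standard correction-for-attenuation formula sometimes
# used to estimate a "true" correlation from observed scores; the paper's
# appendix describes its own estimation methods, which may differ. Numbers are
# made up.

import math

def disattenuated_correlation(r_observed: float,
                              reliability_x: float,
                              reliability_y: float) -> float:
    """True-score correlation = observed correlation / sqrt(rel_x * rel_y)."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

# e.g., observed correlation of .72 between expert ratings (reliability .80)
# and automated scores (reliability .68):
print(round(disattenuated_correlation(0.72, 0.80, 0.68), 2))
```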

14.
A framework for evaluation and use of automated scoring of constructed‐response tasks is provided that entails both evaluation of automated scoring as well as guidelines for implementation and maintenance in the context of constantly evolving technologies. Consideration of validity issues and challenges associated with automated scoring are discussed within the framework. The fit between the scoring capability and the assessment purpose, the agreement between human and automated scores, the consideration of associations with independent measures, the generalizability of automated scores as implemented in operational practice across different tasks and test forms, and the impact and consequences for the population and subgroups are proffered as integral evidence supporting use of automated scoring. Specific evaluation guidelines are provided for using automated scoring to complement human scoring for tests used for high‐stakes purposes. These guidelines are intended to be generalizable to new automated scoring systems and as existing systems change over time.
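Among the evidence the framework calls for is the impact on the population and subgroups; one illustrative statistic (not prescribed by the article) is the standardized mean difference between automated and human scores computed within subgroups, sketched below with made-up data.

```python
# Illustrative sketch (not from the article) of one evaluation statistic often
# used when comparing automated and human scores: the standardized mean score
# difference, computed within subgroups. Data below are made up.

import statistics

def standardized_mean_difference(auto: list[float], human: list[float]) -> float:
    """(mean_auto - mean_human) / pooled standard deviation."""
    pooled_sd = statistics.pstdev(auto + human)
    return (statistics.mean(auto) - statistics.mean(human)) / pooled_sd

scores = {
    "group_A": ([3.1, 2.8, 3.5, 3.0], [3.0, 2.9, 3.4, 2.8]),
    "group_B": ([2.2, 2.6, 2.4, 2.9], [2.5, 2.8, 2.7, 3.1]),
}
for group, (auto, human) in scores.items():
    print(group, round(standardized_mean_difference(auto, human), 2))
```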

15.
席小明 《中国考试》2021,(5):56-62,71
With the rapid development of artificial intelligence, AI technology is being applied ever more widely in education, and many technology companies are investing in AI products for educational assessment and learning. However, the quality of these technologies and products varies greatly and is difficult for users to judge, which to some extent hampers the development of the educational AI industry. To help users evaluate the quality of AI products for educational assessment and learning, this paper proposes evaluation methods for four technologies: automatic generation of test items or learning materials, adaptive learning, automated scoring, and automated feedback, and offers suggestions from the perspectives of decision makers and users on how to evaluate educational AI technologies.

16.
Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to minimize the average number of classification errors, minimize the maximum error rate across all attributes being measured, hit a target set of error rates, or optimize any other prescribed objective function. Under multiple simulation conditions, the algorithm compared favorably with a standard method of automated test assembly, successfully finding solutions that were appropriate for each stated goal.
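The article's genetic algorithm targets CDM classification error rates; the toy sketch below only illustrates the general shape of such an algorithm: a fixed-length item subset evolved by selection, crossover, and mutation against a generic objective. The item pool, objective function, and all parameters are invented.

```python
# Toy sketch of a genetic algorithm for automated test assembly: choose a
# fixed-length subset of items minimizing a generic objective. This is only an
# illustration of the approach, not the article's algorithm or objective
# (which targets CDM classification error rates).

import random

random.seed(0)
N_ITEMS, TEST_LEN = 40, 10
# Hypothetical per-item "information" contribution for each of 3 attributes.
item_info = [[random.random() for _ in range(3)] for _ in range(N_ITEMS)]

def objective(test: frozenset[int]) -> float:
    """Lower is better: the worst-covered attribute drives the fitness."""
    per_attr = [sum(item_info[i][a] for i in test) for a in range(3)]
    return -min(per_attr)

def random_test() -> frozenset[int]:
    return frozenset(random.sample(range(N_ITEMS), TEST_LEN))

def crossover(p1: frozenset[int], p2: frozenset[int]) -> frozenset[int]:
    pool = list(p1 | p2)
    return frozenset(random.sample(pool, TEST_LEN))

def mutate(test: frozenset[int]) -> frozenset[int]:
    out_item = random.choice(list(test))
    in_item = random.choice([i for i in range(N_ITEMS) if i not in test])
    return frozenset(test - {out_item} | {in_item})

population = [random_test() for _ in range(30)]
for _ in range(100):
    population.sort(key=objective)
    parents = population[:10]                  # simple truncation selection
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(20)]
    population = parents + children

best = min(population, key=objective)
print(sorted(best), round(objective(best), 3))
```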

17.
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows scores to be returned faster at lower cost. In the module, they discuss automated scoring from a number of perspectives. First, they discuss benefits and weaknesses of automated scoring, and what psychometricians should know about automated scoring. Next, they describe the overall process of automated scoring, moving from data collection to engine training to operational scoring. Then, they describe how automated scoring systems work, including the basic functions around score prediction as well as other flagging methods. Finally, they conclude with a discussion of the specific validity demands around automated scoring and how they align with the larger validity demands around test scores. Two data activities are provided. The first is an interactive activity that allows the user to train and evaluate a simple automated scoring engine. The second is a worked example that examines the impact of rater error on test scores. The digital module contains a link to an interactive web application as well as its R-Shiny code, diagnostic quiz questions, activities, curated resources, and a glossary.
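The module describes moving from data collection to engine training to operational scoring; the sketch below is a deliberately tiny stand-in for the engine-training and score-prediction steps (TF-IDF features plus logistic regression), not the module's engine. The responses and scores are invented.

```python
# Minimal sketch of the "engine training" step described above: learn to map
# response text to human scores, then predict scores for new responses. The
# responses and scores are invented; real engines use far richer features and
# much more data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_responses = [
    "plants use sunlight to make food through photosynthesis",
    "the plant eats dirt to grow",
    "photosynthesis converts light energy into chemical energy in chloroplasts",
    "i dont know",
]
human_scores = [2, 1, 2, 0]

engine = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
engine.fit(train_responses, human_scores)

new_response = ["light energy becomes chemical energy during photosynthesis"]
print(engine.predict(new_response))
```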

18.
For automated vehicles, comfortable driving will improve passengers' satisfaction. Reducing fuel consumption brings economic profits for car owners, decreases the impact on the environment, and increases energy sustainability. In addition to comfort and fuel economy, automated vehicles also have the basic requirements of safety and car-following. For this purpose, an adaptive cruise control (ACC) algorithm with multiple objectives is proposed based on a model predictive control (MPC) framework. In the proposed ACC algorithm, safety is guaranteed by constraining the inter-vehicle distance within a safe range; the requirements of comfort and car-following are treated as performance criteria, and optimal reference trajectories are introduced to improve fuel economy. The performance of the proposed ACC algorithm is simulated and analyzed in five representative traffic scenarios and multiple experiments. The results show that not only are the safety and car-following objectives satisfied, but driving comfort and fuel economy are also improved significantly.
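As a hedged illustration only (not the controller proposed in the paper), the sketch below sets up a toy MPC car-following problem with a safety gap constraint and comfort/fuel penalties as a convex quadratic program, using the third-party cvxpy package. All vehicle parameters, weights, and the initial state are invented.

```python
# Toy model-predictive-control sketch of car-following ACC with a safety gap
# constraint, comfort (acceleration) penalty, and a crude fuel proxy. This is a
# simplified illustration, not the controller proposed in the paper. Requires
# the third-party cvxpy package.

import cvxpy as cp

DT, N = 0.2, 25                 # time step [s], prediction horizon
V_LEAD, D_DES, D_SAFE = 15.0, 30.0, 10.0
A_MIN, A_MAX = -3.0, 2.0

d = cp.Variable(N + 1)          # inter-vehicle distance
v = cp.Variable(N + 1)          # ego velocity
a = cp.Variable(N)              # ego acceleration (control input)

constraints = [d[0] == 20.0, v[0] == 18.0]        # hypothetical initial state
for k in range(N):
    constraints += [
        d[k + 1] == d[k] + (V_LEAD - v[k]) * DT,  # gap dynamics
        v[k + 1] == v[k] + a[k] * DT,             # ego dynamics
        d[k + 1] >= D_SAFE,                       # safety constraint
        a[k] >= A_MIN, a[k] <= A_MAX,
    ]

cost = (cp.sum_squares(d - D_DES)               # car-following: track desired gap
        + 5 * cp.sum_squares(v - V_LEAD)        # match lead-vehicle speed
        + 20 * cp.sum_squares(a))               # comfort / rough fuel proxy

problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()
print("first planned acceleration:", round(float(a.value[0]), 3))
```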

19.
The technical strengths of automated essay scoring systems provide a good platform for innovating and reforming English writing instruction. This study designs and implements a writing instruction model based on an automated essay scoring system, covering the pre-writing stage, the drafting and peer-review stage, the revision and automated-scoring stage, and the in-class review and final-draft stage. A one-year teaching experiment shows that the new model prompts students to write regularly, maintains their writing frequency, stimulates their interest in writing, cultivates autonomous writing ability, and improves their English writing proficiency.

20.
This article presents considerations for using automated scoring systems to evaluate second language writing. A distinction is made between English language learners in English-medium educational systems and those studying English in their own countries for a variety of purposes, and between learning-to-write and writing-to-learn in a second language (Manchón, 2011a), extending Manchón's framework from instruction to assessment and drawing implications for construct definition. Next, an approach to validity based on articulating an interpretive argument is presented and discussed with reference to a recent study of the use of e-rater on the TOEFL. Challenges and opportunities for the use of automated scoring systems are presented.
