The future of standardised assessment: Validity and trust in algorithms for assessment and scoring
Authors:Cesare Aloisi
Affiliation:AQA, Manchester, UK
Abstract:This article considers the challenges of using artificial intelligence (AI) and machine learning (ML) to assist high-stakes standardised assessment. It focuses on the detrimental effect that even state-of-the-art AI and ML systems could have on the validity of national exams of secondary education, and on how lower validity would, in turn, undermine trust in the system. To support this conclusion, three unresolved issues in AI (unreliability, low explainability and bias) are addressed to show how each of them would compromise the interpretations and uses of exam results (i.e., exam validity). Furthermore, the article relates validity to trust, and specifically to the ABI+ model of trust. Evidence gathered as part of exam validation supports each of the four trust-enabling components of the ABI+ model (ability, benevolence, integrity and predictability). It is argued, therefore, that the three AI barriers to exam validity limit the extent to which an AI-assisted exam system could be trusted. The article suggests that addressing the issues of AI unreliability, low explainability and bias should be sufficient to put AI-assisted exams on a par with traditional ones, but might not go so far as to fully reassure the public. To achieve this, it is argued that changes to the quality assurance mechanisms of the exam system will be required. This may involve, for example, integrating principled AI frameworks into assessment policy and regulation.
Keywords:artificial intelligence  automated assessment  bias  education  trust  United Kingdom