968 results found (search time: 15 ms)
961.
The application and development of new-generation information technologies is an important support for the modernization of emergency management. New-generation information technologies such as big data and artificial intelligence have already been applied in fields such as natural-disaster response and workplace safety. They have improved government capabilities in monitoring and early warning, regulatory enforcement, emergency command and decision support, rescue operations, and social mobilization; raised the intrinsic safety level of enterprises; provided important support for the precise prevention and control of the COVID-19 pandemic; enhanced the effectiveness of emergency management; and strengthened the public's sense of security. In the process of comprehensively advancing the great rejuvenation of the Chinese nation through Chinese-style modernization, and proceeding from the strategic need for a new security architecture that safeguards the new development pattern, the development and application of new-generation information technologies must go beyond meeting specific, single-point business needs. It must pay greater attention to the emergency management of unconventional emergencies, more prominently reflect the value guidance of the holistic national security concept, more strongly support the institutional advantages of China's emergency management system, and place greater emphasis on standards alignment, open innovation, and intelligent utilization of data resources, taking the reduction of uncertainty as the main line so as to achieve the co-evolution of information technology and emergency management. On the research side, enabling the modernization of emergency management with new-generation information technologies also requires supporting autonomous knowledge construction, advancing interdisciplinary integration, leading information-technology innovation, and promoting industrial prosperity, thereby making greater contributions to realizing Chinese-style modernization.
962.
To explore the development trends of medical artificial intelligence and the problems facing medical AI in China, this study analyzes the field along three dimensions: published papers, clinical trials, and medical-device products. The quantitative results show that the number of medical AI papers published both in China and worldwide is growing rapidly, with China at the international forefront in publication volume and highly cited papers. AI clinical trials are numerous and cover many disease types, but the quality of clinical-trial registration needs improvement. The number of approved AI medical devices in China is increasing year by year, and certification of medical AI products is advancing steadily. Alongside this growth, medical AI still faces many problems and challenges at the theoretical, technical, ethical-legal, and application levels.
963.
Since the 1930s, marine ranching has passed through the Marine Ranching 1.0 stage (traditional marine ranching), characterized by agro-pastoralization and engineering, and the Marine Ranching 2.0 stage (marine ecological ranching), characterized by ecologization and informatization. It is now about to enter the Marine Ranching 3.0 stage (whole-domain aquatic ecological ranching covering both freshwater and marine waters), characterized by digitalization and systematization. Marine Ranching 3.0 must adhere to the modern aquatic ecological ranching development philosophy of "ecological, precise, intelligent, integrated"; be characterized by parallel protection and utilization, expanded application scenarios, core-technology breakthroughs, and innovative development models; and build a full-chain industrial technology framework spanning scientific site selection, planning and layout, habitat restoration, resource conservation, safety assurance, and integrated development. The goal is to create a "modern upgraded version" of northern marine ranching, open up "new strategic space" for southern marine ranching, launch "new freshwater pilots" for aquatic ecological ranching, support the construction of national marine-ranching demonstration zones, and lead the construction and development of modern aquatic ecological ranching internationally.
964.
To strengthen the "twin-city" linkage between Guangzhou and Shenzhen in the artificial intelligence industry and drive coordinated regional development across the Guangdong-Hong Kong-Macao Greater Bay Area, this paper draws on regional industrial-synergy theory and analyzes data on the two cities' AI innovation platforms, enterprises, and patents. The analysis identifies several shortcomings in the coordinated development of the Guangzhou-Shenzhen AI industry: low inter-city patent-collaboration volume, declining willingness to collaborate on patents, limited industry-university-research collaborative innovation, and insufficient co-construction and sharing of application scenarios and data centers. After examining the main factors constraining inter-city synergy, the paper proposes countermeasures and suggestions for improving the Guangzhou-Shenzhen AI industrial ecosystem in four areas: strategic synergy, innovation-chain collaboration, industrial-chain coupling, and co-construction and sharing of application scenarios.
965.
This article considers the challenges of using artificial intelligence (AI) and machine learning (ML) to assist high-stakes standardised assessment. It focuses on the detrimental effect that even state-of-the-art AI and ML systems could have on the validity of national exams of secondary education, and how lower validity would negatively affect trust in the system. To reach this conclusion, three unresolved issues in AI (unreliability, low explainability and bias) are addressed, to show how each of them would compromise the interpretations and uses of exam results (i.e., exam validity). Furthermore, the article relates validity to trust, and specifically to the ABI+ model of trust. Evidence gathered as part of exam validation supports each of the four trust-enabling components of the ABI+ model (ability, benevolence, integrity and predictability). It is argued, therefore, that the three AI barriers to exam validity limit the extent to which an AI-assisted exam system could be trusted. The article suggests that addressing the issues of AI unreliability, low explainability and bias should be sufficient to put AI-assisted exams on par with traditional ones, but might not go as far as fully reassuring the public. To achieve this, it is argued that changes to the quality assurance mechanisms of the exam system will be required. This may involve, for example, integrating principled AI frameworks in assessment policy and regulation.
966.
Advancements in artificial intelligence are rapidly increasing. The new-generation large language models, such as ChatGPT and GPT-4, bear the potential to transform educational approaches, such as peer-feedback. To investigate peer-feedback at the intersection of natural language processing (NLP) and educational research, this paper suggests a cross-disciplinary framework that aims to facilitate the development of NLP-based adaptive measures for supporting peer-feedback processes in digital learning environments. To conceptualize this process, we introduce a peer-feedback process model, which describes learners' activities and textual products. Further, we introduce a terminological and procedural scheme for systematically deriving measures to foster the peer-feedback process and for describing how NLP may enhance the adaptivity of such learning support. Building on prior research on education and NLP, we apply this scheme to all learner activities of the peer-feedback process model to exemplify a range of NLP-based adaptive support measures. We also discuss the current challenges and suggest directions for future cross-disciplinary research on the effectiveness and other dimensions of NLP-based adaptive support for peer-feedback. Building on our suggested framework, future research and collaborations at the intersection of education and NLP can innovate peer-feedback in digital learning environments.

Practitioner notes

What is already known about this topic
  • There is considerable research in educational science on peer-feedback processes.
  • Natural language processing facilitates the analysis of students' textual data.
  • There is a lack of systematic orientation regarding which NLP techniques can be applied to which data to effectively support the peer-feedback process.
What this paper adds
  • A comprehensive overview model that describes the relevant activities and products in the peer-feedback process.
  • A terminological and procedural scheme for designing NLP-based adaptive support measures.
  • An application of this scheme to the peer-feedback process model, yielding example use cases of how NLP may be employed to support each learner activity during peer-feedback.
Implications for practice and/or policy
  • To boost the effectiveness of their peer-feedback scenarios, instructors and instructional designers should identify relevant leverage points, corresponding support measures, adaptation targets and automation goals based on theory and empirical findings.
  • Management and IT departments of higher education institutions should strive to provide digital tools based on modern NLP models and integrate them into the respective learning management systems; those tools should help in translating the automation goals requested by their instructors into prediction targets, take relevant data as input and allow for evaluating the predictions.
967.
Artificial intelligence (AI) is increasingly integrating into our society. University education needs to maintain its relevance in an AI-mediated world, but the higher education sector is only beginning to engage deeply with the implications of AI within society. We define AI according to a relational epistemology, where, in the context of a particular interaction, a computational artefact provides a judgement about an optimal course of action and this judgement cannot be traced. Therefore, by definition, AI must always act as a ‘black box’. Rather than seeking to explain ‘black boxes’, we argue that a pedagogy for an AI-mediated world involves learning to work with opaque, partial and ambiguous situations, which reflect the entangled relationships between people and technologies. Such a pedagogy asks learners to locate AI as socially bounded, where AI is always understood within the contexts of its use. We outline two particular approaches to achieve this: (a) orienting students to the quality standards that surround AIs, what might be called the tacit and explicit ‘rules of the game’; and (b) providing meaningful interactions with AI systems.

Practitioner notes

What is already known about this topic
  • Artificial intelligence (AI) is conceptualised in many different ways but is rarely defined in the higher education literature.
  • Experts have outlined a range of graduate capabilities for working in a world of AI such as teamwork or ethical thinking.
  • The higher education literature outlines an imperative need to respond to AI, as underlined by recent commentary on ChatGPT.
What this paper adds
  • A definition of an AI that is relational: A particular interaction where a computational artefact provides a judgement about an optimal course of action, which cannot be easily traced.
  • Focusing on working with AI black boxes rather than trying to see inside the technology.
  • Describing a pedagogy for an AI-mediated world that promotes working in complex situations with partial and indeterminate information.
Implications for practice and/or policy
  • Focusing on quality standards helps learners understand the social regulating boundaries around AI.
  • Promoting learner interactions with AI as part of a sociotechnical ensemble helps build evaluative judgement in weighting AI's contribution to work.
  • Asking learners to work with AI systems prompts understanding of the evaluative, ethical and practical necessities of working with a black box.
968.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号