Similar Literature
20 similar documents found (search time: 15 ms)
1.
张雪  陈秀娟  张志强 《现代情报》2018,38(12):151-163
[Purpose/Significance] This paper reviews recent international research trends and developments in medical informatics, providing researchers with relevant literature, a solid basis for topic selection and information research, and suggestions for the future development of medical informatics in China. [Method/Process] Using the past ten years of literature from ten core medical informatics journals indexed in Web of Science as the data source, statistical tools such as BICOMB, TDA, SPSS, and UCINET were combined with bibliometric methods to count and visualize publication volume, countries, institutions, core authors, and keywords. [Result/Conclusion] From 2008 to 2017, the ten core medical informatics journals in Web of Science published 11,823 relevant papers, with output growing steadily year by year. The United States leads research in this field, Harvard University accounts for the largest share of output, and the core authors are closely connected, forming small clusters of a certain scale. The study finds that international medical informatics research over the past decade has focused on four areas: the development and management of health information systems; the objects and practical applications of health information analysis methods; the applications and challenges of artificial intelligence and data mining in clinical diagnosis and treatment; and the application and development of new medical informatics technologies.

2.
顾天阳  赵旺  曹林 《情报科学》2022,40(3):40-44
[Purpose/Significance] Medical and health big data offer unprecedented opportunities for smart healthcare. However, "data chimneys", "information silos", and inefficient knowledge services seriously hinder innovation in healthcare service models. How to achieve knowledge management innovation for comprehensive, full-cycle smart healthcare services through deep aggregation of medical and health big data and dynamic knowledge services has become an important problem in medical information resource management. [Method/Process] This paper introduces a federated learning mechanism and a deep aggregation method for the secure sharing of large-scale, multi-source, heterogeneous medical and health data, and proposes a human-machine collaborative method for constructing a medical case base together with a case knowledge reasoning method based on the Jaccard distance. [Result/Conclusion] The method provides an integrated knowledge management service framework for smart diagnosis and treatment, clinical teaching, and research support. [Innovation/Limitation] It offers not only a data management methodology for smart healthcare and precision health management, but also new ideas for building the 5P smart healthcare service model.
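As an illustration of the Jaccard-distance case retrieval named above, here is a minimal sketch; the case structure, field values, and function names are assumptions for the example, not details from the paper.

```python
def jaccard_distance(a: set, b: set) -> float:
    """Jaccard distance = 1 - |A ∩ B| / |A ∪ B|; 0 means identical sets."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Hypothetical case base: each case is represented as a set of coded findings.
case_base = {
    "case_001": {"fever", "cough", "dyspnea"},
    "case_002": {"chest_pain", "palpitations"},
    "case_003": {"fever", "cough", "sore_throat"},
}

def retrieve_similar_cases(query: set, k: int = 2):
    """Return the k cases closest to the query by Jaccard distance."""
    ranked = sorted(case_base.items(), key=lambda kv: jaccard_distance(query, kv[1]))
    return ranked[:k]

print(retrieve_similar_cases({"fever", "cough"}))
```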

3.
赵月华  朱思成  苏新宁 《情报科学》2021,39(12):165-173
[Purpose/Significance] To address the lack of domain expertise when building datasets of false online medical information, and to support the construction of detection models for false online medical information in small-sample settings. [Method/Process] This paper proposes building a dataset of false online medical information by transforming and extracting authoritative rumor-refutation information, and then trains traditional machine learning models, a CNN model, and a BERT model for classification. [Result/Conclusion] The results show that rumor-refutation information allows a false medical information dataset to be built at low cost without expert annotation. Comparative experiments show that a BERT model pre-trained on Weibo data achieves an accuracy of 95.91% and an F1 score of 94.57%, improvements of nearly 6 and 4 percentage points over the traditional machine learning models and the CNN model respectively, indicating that the pre-trained BERT model performs better on the task of detecting false online medical information. [Innovation/Limitation] The proposed method can build domain-specific false information datasets at low cost, and the BERT-based detection model is also practical in small-sample settings; however, the dataset size, the range of deep learning baselines, and the evaluation metrics remain to be extended.

4.
盛姝  黄奇  郭进京  解绮雯  杨洋 《情报科学》2022,40(5):161-172
[Purpose/Significance] As one of the most important research topics at the intersection of medicine and management science, intelligent diagnosis and treatment in online health communities plays an important role against the background of China's "Internet + healthcare" development. [Method/Process] From the perspectives of ontology and case-based reasoning (CBR), this paper builds an automatic reasoning model for diagnosis and treatment solutions in online health communities based on a knowledge base and a case base. Disease science-popularization and doctor-patient Q&A data from "好大夫在线" (haodf.com) were collected with the 八爪鱼 crawler to build the ontology, and text analysis was used to mine interpretable disease knowledge and solutions, enabling semi-automatic construction of the knowledge base and case base. Taking adult congenital heart disease as an example, ACHD-AP was formally defined, and a reasoning engine was used to classify patient cases by risk, assign them to the corresponding disease knowledge base, and automatically infer diagnosis and treatment solutions. [Result/Conclusion] The study shows that the inferred solutions are highly similar to expert recommendations, and the ontology hierarchies of the knowledge base and case base are reasonable under OntoQA evaluation. [Innovation/Limitation] The automatic reasoning model based on a knowledge base and a case base provides a methodological reference for realizing intelligent diagnosis and treatment and for innovating service models in online health communities.

5.
To address the need for large amounts of labeled data in traditional deep learning approaches to classifying steel plate surface defect images, an efficient classification method based on active learning is proposed. The method consists of a lightweight convolutional neural network and an uncertainty-based sample selection strategy for active learning. The network uses a simplified convolutional base for feature extraction and replaces the hidden layers of the conventional densely connected classifier with a global pooling layer to reduce overfitting. To better measure the model's uncertainty about the class of an unlabeled image, each unlabeled sample is first fed into the model trained on the labeled samples to obtain its probability distribution over classes (PDC); the same model then predicts the labeled samples to obtain the average PDC for each class. The KL divergence between these two distributions is used as the uncertainty score for selecting unlabeled images for manual annotation. Comparative experiments on the open NEU-CLS defect dataset show that the method reaches 97% accuracy with only 44% of the labels, greatly reducing annotation cost.
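A minimal sketch of the KL-based selection step described above, under one plausible reading (each unlabeled sample is compared with the average PDC of its predicted class); the arrays, function names, and toy data are illustrative, not from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over the same classes."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def select_for_annotation(unlabeled_pdc, labeled_mean_pdc, budget=10):
    """unlabeled_pdc: (N, C) softmax outputs for unlabeled images.
    labeled_mean_pdc: (C, C) array; row c is the average softmax output of the
    labeled images whose true class is c.
    Each unlabeled sample is scored by the KL divergence between its own PDC and
    the average PDC of its predicted class; the most divergent samples are returned."""
    scores = []
    for p in unlabeled_pdc:
        c = int(np.argmax(p))                       # predicted class
        scores.append(kl_divergence(p, labeled_mean_pdc[c]))
    return np.argsort(scores)[-budget:]             # largest divergence = most uncertain

# Toy usage with 3 defect classes.
rng = np.random.default_rng(0)
unlabeled = rng.dirichlet(np.ones(3), size=100)
reference = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
print(select_for_annotation(unlabeled, reference, budget=5))
```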

6.
The measures typically used to assess binary classification problems fail to incorporate the uncertainty inherent in many contexts into the results. We propose using a Bayesian model to express the uncertainty in binary classification problems. This study identified 10 previous studies that provided sufficient data to demonstrate the use of Bayesian analysis in Information Systems (IS) contexts with varying levels of uncertainty. The analysis and user study show that the addition of Bayesian analysis is most useful in high-uncertainty contexts with a wide interval for positive predictive value. Such an interval leads to high uncertainty even when sensitivity and specificity are very certain. The usefulness of Bayesian analysis under medium uncertainty depends on the context, and under low uncertainty it does not add much value. The user study showed that presenting models with uncertainty changed researcher perception of which model performed best, with 18 of 21 researchers changing their opinion. We recommend that authors estimate the uncertainty in their models and provide confusion matrices and prevalence estimates in their results to enable Bayesian analysis as research in a domain matures.
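A minimal sketch of the kind of Bayesian analysis the abstract recommends, assuming a reported confusion matrix and a prevalence estimate; the numbers and priors below are made up for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical confusion matrix reported by a study.
tp, fn, fp, tn = 90, 10, 30, 870
# Hypothetical prevalence estimate expressed as a Beta prior (~5%, fairly uncertain).
prev_a, prev_b = 5, 95

n = 100_000
# Beta posteriors with uniform Beta(1, 1) priors on sensitivity and specificity.
sens = rng.beta(1 + tp, 1 + fn, size=n)
spec = rng.beta(1 + tn, 1 + fp, size=n)
prev = rng.beta(prev_a, prev_b, size=n)

# Positive predictive value via Bayes' rule, evaluated per posterior draw.
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))

lo, hi = np.percentile(ppv, [2.5, 97.5])
print(f"PPV 95% credible interval: [{lo:.3f}, {hi:.3f}]")
```

Even with tight posteriors on sensitivity and specificity, an uncertain prevalence widens the PPV interval, which matches the abstract's point about high-uncertainty contexts.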

7.
The difficulties facing the development of information science require researchers to reflect fundamentally on its current state, level of development, and mode of existence. The study argues that the absence of theoretical thinking in information science is a key factor constraining the progress of its research, and that substantive progress must rely on such theoretical thinking. Reviewing the history of the discipline, reflecting on the problems in its research, and thereby transforming its research paths have therefore become important issues for information science research today.

8.
[Purpose/Significance] Based on low-level semantic features consisting of external features such as color and texture together with local visual features, this paper uses random forests to automatically annotate medical images with semantic labels. This supports clinical decision-making, helps the general public understand medical knowledge and their own health, broadens the scope of information organization and processing for researchers in library and information science in the big data environment, promotes interdisciplinary integration, advances smart medicine, and provides intellectual and technical support for the Healthy China strategy. [Method/Process] Combining library and information science knowledge with medical knowledge, image semantic annotation is treated as a multi-class classification problem: first, low-level semantic features such as color, texture, and local visual features are extracted; then a random forest based medical image annotation scheme is designed. [Result/Conclusion] Compared with a random-tree annotation scheme, the scheme that fuses low-level semantic features achieves better results. [Innovation/Limitation] A visual semantic dictionary is introduced into image annotation as a low-level semantic feature of medical images, and a random forest based annotation scheme is constructed; the limitation is that only the BreaKHis dataset was used as experimental data.
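A minimal sketch of treating annotation as multi-class classification with a random forest, assuming the color/texture/local descriptors have already been extracted and concatenated into one feature vector per image; the random features and label count below are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical stand-in for extracted low-level features: 500 images, 64-dim vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
y = rng.integers(0, 4, size=500)        # 4 hypothetical semantic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```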

9.
The variables in the mechanisms through which organizational innovation is influenced are uncertain and dynamic, which makes Bayesian network analysis worth attempting. Based on the principles of Bayesian networks, a Bayesian network model of the organizational innovation influence mechanism is constructed, and ways of simplifying the computation of complex Bayesian networks are discussed. An example application shows that the method overcomes the limitation of other traditional approaches to linear, static analysis and reflects the dynamic relationships among the variables more accurately.
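As a self-contained illustration of the kind of inference such a Bayesian network supports, here is a toy three-variable example; the variables, structure, and probabilities are invented for the sketch and are not taken from the paper.

```python
# Toy Bayesian network: Leadership (L) -> Innovation climate (C) -> Innovation success (S).
# All variables are binary; the probabilities are illustrative only.
P_L = {True: 0.6, False: 0.4}
P_C_given_L = {True: {True: 0.8, False: 0.2},    # P(C | L)
               False: {True: 0.3, False: 0.7}}
P_S_given_C = {True: {True: 0.7, False: 0.3},    # P(S | C)
               False: {True: 0.2, False: 0.8}}

def joint(l, c, s):
    """Full joint probability under the chain L -> C -> S."""
    return P_L[l] * P_C_given_L[l][c] * P_S_given_C[c][s]

def posterior_L_given_S(s=True):
    """P(L = True | S = s) by enumerating the joint distribution."""
    num = sum(joint(True, c, s) for c in (True, False))
    den = sum(joint(l, c, s) for l in (True, False) for c in (True, False))
    return num / den

print(f"P(strong leadership | innovation success) = {posterior_L_given_S(True):.3f}")
```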

10.
11.
薛素美 《科教文汇》2014,(6):211-212
With changes in China's model of medical and health services and the large-scale expansion of enrollment in medical higher education, employment difficulties for medical graduates have become increasingly prominent. At the same time, as health care reform deepens, the rise of grassroots health care institutions and of emerging medicine-related sectors such as health management offers medical graduates unprecedented opportunities and challenges. Taking the employment situation of Xuzhou Medical College over the past three years as a starting point, this paper analyzes why medical graduates face a difficult and narrow job market, and suggests that they broaden their employment horizons, change traditional attitudes toward employment, and actively move into grassroots institutions and emerging medicine-related sectors.

12.
The combination of large open data sources with machine learning approaches presents a potentially powerful way to predict events such as protest or social unrest. However, accounting for uncertainty in such models, particularly when using diverse, unstructured datasets such as social media, is essential to guarantee the appropriate use of such methods. Here we develop a Bayesian method for predicting social unrest events in Australia using social media data. The method uses machine learning to classify individual social media postings as relevant, and an empirical Bayesian approach to calculate posterior event probabilities. We use the method to predict events in Australian cities over a period in 2017/18.
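The abstract does not spell out its empirical Bayesian step; as a rough, hypothetical sketch of one common form (a Beta-Binomial model whose prior is estimated from historical data), per-city posterior event probabilities could be computed like this. The cities, counts, and moment-matching choice are assumptions for illustration only.

```python
import numpy as np

# Hypothetical historical data: for each city, (days with an unrest event, days observed).
history = {"Sydney": (4, 365), "Melbourne": (7, 365), "Brisbane": (1, 365),
           "Perth": (2, 365), "Adelaide": (0, 365)}

events = np.array([e for e, _ in history.values()], dtype=float)
days = np.array([d for _, d in history.values()], dtype=float)
rates = events / days

# Empirical Bayes: fit a Beta(a, b) prior to the observed city rates by method of moments.
m, v = rates.mean(), rates.var() + 1e-9
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common

# Posterior mean daily event probability per city, shrunk toward the pooled prior.
for city, (e, d) in history.items():
    post_mean = (a + e) / (a + b + d)
    print(f"{city}: posterior daily event probability ≈ {post_mean:.4f}")
```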

13.
[Purpose/Significance] Deep learning has been one of the research hotspots in artificial intelligence in recent years. Understanding the state of research on deep learning for information organization and retrieval provides a reference for further work in this area. [Method/Content] By reviewing the domestic literature on deep learning for information organization and retrieval, this paper analyzes the main deep learning models, identifies the hot research themes, and, combining the characteristics of deep learning with the research content of information organization and retrieval, predicts the application prospects of deep learning in this field. [Result/Conclusion] The study shows that current research hotspots concentrate on four themes: intelligent information extraction, automatic text classification, sentiment analysis, and text clustering. Future work is expected to move toward heterogeneous information processing, intelligent information retrieval, and personalized information recommendation.

14.
With the advent of Web 2.0, many online platforms such as social networks, blogs, and online magazines produce massive amounts of textual data. This text carries information that can be used for the betterment of humanity, so there is a pressing need to extract the potentially valuable information within it. This study presents an overview of approaches for extracting such information nuggets from text and presenting them in a brief, clear, and concise way. Two major tasks are reviewed: automatic keyword extraction and text summarization. The literature was compiled by collecting scientific articles from the major digital computing research repositories, and the survey covers early approaches through recent advances based on machine learning. The survey finds that annotated benchmark datasets for many textual data generators, such as Twitter and social forums, are not available, and this scarcity has slowed progress in many domains. Applications of deep learning to automatic keyword extraction also remain relatively unaddressed, so the impact of various deep architectures stands as an open research direction. For text summarization, deep learning techniques took hold after the advent of word vectors and currently govern the state of the art in abstractive summarization. One of the major challenges in both tasks is the semantics-aware evaluation of generated results.
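As a minimal sketch of one classic, pre-deep-learning keyword extraction approach the survey would cover (TF-IDF term ranking); the toy corpus and parameter choices are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny illustrative corpus; real systems would use a much larger one.
docs = [
    "Deep learning models dominate abstractive text summarization benchmarks.",
    "Keyword extraction identifies the most informative terms in a document.",
    "Word vectors changed how summarization and extraction systems are built.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# Report the top-3 TF-IDF terms per document as candidate keywords.
for i, doc in enumerate(docs):
    row = tfidf[i].toarray().ravel()
    top = row.argsort()[::-1][:3]
    print(doc[:40], "->", [terms[j] for j in top])
```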

15.
Unmanned surface vehicles (USVs) are a promising marine robotic platform for numerous potential applications in ocean space due to their small size, low cost, and high autonomy. Modelling and control of USVs is challenging because of their intrinsic nonlinearities, strong couplings, high uncertainty, under-actuation, and multiple constraints, and well-designed motion controllers may become ineffective when exposed to the complex and dynamic sea environment. The paper presents a fully data-driven, learning-based motion control method for a USV based on model-based deep reinforcement learning. Specifically, a data-driven prediction model based on a deep network is first trained for the USV from recorded input and output data. Based on the learned prediction model, model predictive motion controllers are presented for trajectory tracking and path following tasks. It is shown that, after learning from random data collected on the USV, the proposed data-driven motion controller follows trajectories and parameterized paths accurately with excellent sample efficiency. Simulation results illustrate the proposed deep reinforcement learning scheme for fully data-driven motion control without any a priori model information of the USV.
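A minimal sketch of the control structure described above: a prediction model used inside a receding-horizon planner. Here the "learned" dynamics is a linear placeholder standing in for the paper's deep network, and the planner is simple random shooting; all names and numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def learned_dynamics(state, action):
    """Placeholder for the learned prediction model: state_{t+1} = f(state_t, action_t).
    In the paper this would be a deep network trained on recorded USV input/output data."""
    A = np.array([[1.0, 0.1], [0.0, 0.95]])
    B = np.array([[0.0], [0.1]])
    return A @ state + (B @ action).ravel()

def mpc_random_shooting(state, target, horizon=10, n_samples=500):
    """Sample candidate action sequences, roll them out through the learned model,
    and return the first action of the lowest-cost sequence (receding horizon)."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, 1))
    best_cost, best_action = np.inf, None
    for seq in candidates:
        s, cost = state.copy(), 0.0
        for a in seq:
            s = learned_dynamics(s, a)
            cost += np.sum((s - target) ** 2) + 0.01 * np.sum(a ** 2)
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action

state, target = np.array([0.0, 0.0]), np.array([1.0, 0.0])
for _ in range(20):                      # closed-loop tracking of a fixed waypoint
    u = mpc_random_shooting(state, target)
    state = learned_dynamics(state, u)
print("final state:", state)
```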

16.
Research on automatic recognition of taxonomic relations in domain ontologies based on BERT   (Cited 1 time: 0 self-citations, 1 by others)
[Purpose/Significance] To automatically learn and recognize the taxonomic relations of a domain ontology and thereby automate the construction of the domain knowledge framework. [Method/Process] After reviewing the state of research at home and abroad on automatic recognition of ontology taxonomic relations and the problems that remain, an automatic recognition model was built on top of BERT, the advanced open-source pre-trained deep learning language model, and an experimental study and evaluation were carried out in the field of resources and environment. [Result/Conclusion] The BERT-based classification model can automatically recognize taxonomic relations in a domain ontology; the method and workflow are highly general and transferable, and recognition accuracy is substantially higher than that of traditional methods. [Innovation/Limitation] BERT is fine-tuned and generalized, improving the generality and accuracy of the taxonomic relation recognition model; however, limited by the quality of the labeled classification corpus, the model's accuracy has not yet peaked and needs further optimization.
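A minimal sketch of fine-tuning BERT for pairwise relation classification with the Hugging Face transformers library; the checkpoint name, label scheme, and example term pairs are assumptions for illustration, not details from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical setup: classify whether term A is a subclass of term B (1) or not (0).
model_name = "bert-base-chinese"                     # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

pairs = [("湿地生态系统", "生态系统"), ("遥感影像", "土壤侵蚀")]   # illustrative term pairs
labels = torch.tensor([1, 0])

# Encode each pair as a two-segment input: [CLS] term A [SEP] term B [SEP]
inputs = tokenizer([a for a, _ in pairs], [b for _, b in pairs],
                   padding=True, truncation=True, return_tensors="pt")

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**inputs, labels=labels)             # a single illustrative training step
outputs.loss.backward()
optimizer.step()

model.eval()
with torch.no_grad():
    preds = model(**inputs).logits.argmax(dim=-1)
print(preds.tolist())
```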

17.
Section identification is an important task for library science, especially knowledge management. Identifying the sections of a paper would help filter noise in entity and relation extraction. In this research, we studied the paper section identification problem in the context of Chinese medical literature analysis, where the subjects, methods, and results are the most valuable from a physician's perspective. Based on previous studies of English literature section identification, we experiment with effective features to use with classic machine learning algorithms. It is found that Conditional Random Fields, which consider sentence interdependency, are more effective at combining different feature sets, such as bag-of-words, part-of-speech, and headings, for Chinese literature section identification. Moreover, classic machine learning algorithms prove more effective than generic deep learning models for this problem. Based on these observations, we design a novel deep learning model, the Structural Bidirectional Long Short-Term Memory (SLSTM) model, which models word and sentence interdependency together with contextual information. Experiments on a human-curated asthma literature dataset show that our approach outperforms traditional machine learning methods and other deep learning methods, achieving close to 90% precision and recall on the task. The model shows good potential for use in other text mining tasks, and the research has significant methodological and practical implications.
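A minimal sketch of CRF-based section identification over sentence sequences, assuming the sklearn-crfsuite package; the cue words, features, toy sentences, and labels are invented for the example and are not the paper's feature set.

```python
import sklearn_crfsuite

def sentence_features(sentences, idx):
    """Simple illustrative features for one sentence within a paper."""
    sent = sentences[idx]
    feats = {
        "bias": 1.0,
        "position": idx / len(sentences),            # relative position in the paper
        "has_method_cue": any(w in sent for w in ("方法", "采用", "试验")),
        "has_result_cue": any(w in sent for w in ("结果", "表明", "显著")),
        "length": len(sent),
    }
    if idx > 0:                                      # neighboring-sentence context
        feats["prev_has_result_cue"] = any(w in sentences[idx - 1] for w in ("结果", "表明"))
    return feats

# Toy training data: each paper is a sequence of sentences with section labels.
papers = [["本研究采用随机对照试验方法。", "结果表明干预组症状显著改善。"]]
labels = [["METHOD", "RESULT"]]

X = [[sentence_features(p, i) for i in range(len(p))] for p in papers]
y = labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```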

18.
熊回香  汪玲  汪琦遇 《情报科学》2022,40(10):12-19
[Purpose/Significance] To apply the strengths of linked data in organizing multi-source heterogeneous data to the sharing of electronic medical record (EMR) resources. [Method/Process] After analyzing the difficulties of EMR resource sharing and the technical characteristics of linked data, a processing workflow for EMR resources based on linked data is proposed, a linked-data-based EMR resource sharing model is constructed, and an empirical study is carried out on a portion of EMR data from a hospital. [Result/Conclusion] The results show that linked data can support the sharing of EMR resources. The proposed model achieves sharing and information extension of EMR resources, alleviates the problem of resources that cannot interconnect, and improves the efficiency of diagnosis, treatment, and retrieval. [Innovation/Limitation] The paper provides a feasible approach to EMR resource sharing in the complex Chinese-language environment; future work should cover a wider scope and larger data volumes, and extend the research toward standardization and privacy protection.
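A minimal sketch of exposing an EMR fragment as linked data with rdflib; the namespace, properties, and record values are invented for illustration (a real deployment would reuse standard vocabularies such as SNOMED CT or HL7 FHIR RDF rather than an ad-hoc one).

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import FOAF, XSD

EX = Namespace("http://example.org/emr/")            # hypothetical namespace

g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

patient = EX["patient/001"]
record = EX["record/001-20240101"]

g.add((patient, RDF.type, FOAF.Person))
g.add((patient, FOAF.name, Literal("患者甲")))
g.add((record, RDF.type, EX.MedicalRecord))
g.add((record, EX.belongsTo, patient))
g.add((record, EX.diagnosis, Literal("先天性心脏病", lang="zh")))
g.add((record, EX.admissionDate, Literal("2024-01-01", datatype=XSD.date)))

# Serialize as Turtle so other systems can interlink and query the record.
print(g.serialize(format="turtle"))
```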

19.
Medical question answering is a crucial aspect of medical artificial intelligence, as it aims to improve the efficiency of clinical diagnosis and the quality of treatment outcomes. Despite the numerous methods available for medical question answering, they tend to overlook the imbalance in the data generation mechanism and the pseudo-correlation caused by the textual characteristics of the task. This pseudo-correlation arises because many words in a question answering task are irrelevant to the answer yet carry significant weight; these words can affect the feature representation and establish a false correlation with the final answer. Furthermore, data imbalance can cause the model to blindly follow the majority classes, biasing the final answer. Confounding factors, including the data imbalance mechanism, bias due to textual characteristics, and other unknown factors, may also mislead the model and limit its performance. In this study, we propose a new counterfactual-based approach consisting of a feature encoder and a counterfactual decoder. The feature encoder uses ChatGPT and label-resetting techniques to create counterfactual data, compensating for distributional differences in the dataset and alleviating the data imbalance; sampling prior to label resetting further mitigates the imbalance, so that label resetting yields better and more balanced counterfactual data. The constructed counterfactual data then helps the counterfactual classifier learn causal features. The counterfactual decoder compares counterfactual data with real data to optimize the model and capture the causal characteristics that genuinely influence the label, generating the final answer. The proposed method was tested on PubMedQA, a medical dataset, using machine learning and deep learning models. Comprehensive experiments demonstrate that the method achieves state-of-the-art results and effectively reduces the false correlation caused by confounders.

20.
This paper is part of a series examining the fundamental nature of informatics, used here as a convenient umbrella term for the overlapping disciplinary areas of information systems, information management and information technology. The aim of the current paper is to consider some universal features of information technology, in terms of a conceptual framework established in previous work. We ground the discussion in a significant historical case: Hollerith's electric tabulating system, one of the earliest examples of automatic data processing. Through examination of this case and the technologies used, we establish an interpretation of the essence of information technology in terms of formative acts of data representation and processing.
