
A New Algorithm Based on an Entropy-Factor Penalty Term for Improving the Generalization Ability of Feedforward Networks (in English)
Cite this article: Lu Ziyi, Yang Luxi, He Zhenya. A New Algorithm Based on an Entropy-Factor Penalty Term for Improving the Generalization Ability of Feedforward Networks (in English)[J]. Journal of Southeast University, 1998, 0(1)
Authors: Lu Ziyi, Yang Luxi, He Zhenya
Affiliation: DSP Laboratory, Department of Radio Engineering, Southeast University
Abstract: Generalization ability is a pressing problem when neural networks are used to extract features from noisy data. The usual remedy is to control the complexity of the network. In this paper, we propose a class of penalty terms based on an entropy factor; such a term faithfully reflects the data features extracted by the hidden units and thus effectively prevents overtraining of feedforward networks. Computer simulation results show that the algorithm greatly improves the generalization ability of the network.

Keywords: generalization ability, overtraining, entropy

Using Entropy Penalty Term for Improving the Generalization Ability of Multilayer Feedforward Networks
Lu Ziyi, Yang Luxi, He Zhenya. Using Entropy Penalty Term for Improving the Generalization Ability of Multilayer Feedforward Networks[J]. Journal of Southeast University (English Edition), 1998, 0(1)
Authors: Lu Ziyi, Yang Luxi, He Zhenya
Affiliation: DSP Laboratory, Department of Radio Engineering, Southeast University
Abstract: Generalization ability is a major problem encountered when using neural networks to find structure in noisy data sets. Controlling the network complexity is a common way to address it. In this paper, a novel additive penalty term that reflects the features extracted by the hidden units is introduced to eliminate overtraining of multilayer feedforward networks. Computer simulations demonstrate that this penalty term, computed in an unsupervised fashion, greatly improves the generalization ability.
Keywords: generalization, overtraining, entropy
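
The abstract does not give the exact form of the entropy penalty, so the following is only a minimal sketch of the general idea, under stated assumptions: each sample's sigmoid hidden activations are normalized into a probability distribution over the hidden units, and the mean Shannon entropy of these distributions is added to the squared-error cost with a weight lam. The penalty form, the hyperparameter lam, the network size, and the noisy-sine toy data are all illustrative choices, not taken from the paper; PyTorch is used only for automatic differentiation.

```python
# Sketch: feedforward network trained with an entropy-based penalty on the
# hidden-layer activations (assumed form, not the authors' exact formulation).
import torch
import torch.nn as nn


class EntropyPenaltyMLP(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))   # hidden-unit activations in (0, 1)
        return self.out(h), h


def entropy_penalty(h, eps=1e-8):
    # Normalize each sample's hidden activations into a probability vector
    # over hidden units and return the mean Shannon entropy over the batch.
    # This is an unsupervised term: it depends only on the hidden features.
    p = h / (h.sum(dim=1, keepdim=True) + eps)
    return -(p * torch.log(p + eps)).sum(dim=1).mean()


def train(model, x, y, lam=0.1, lr=0.01, epochs=2000):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        pred, h = model(x)
        # Total cost = data-fitting error + weighted entropy penalty.
        loss = mse(pred, y) + lam * entropy_penalty(h)
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Toy usage: fit a noisy sine curve (illustrative data, not from the paper).
    torch.manual_seed(0)
    x = torch.linspace(-3, 3, 60).unsqueeze(1)
    y = torch.sin(x) + 0.1 * torch.randn_like(x)
    model = train(EntropyPenaltyMLP(1, 10, 1), x, y)
```

Under this assumed formulation, minimizing the penalty pushes each input to activate only a few hidden units strongly, which is one way a term defined on the hidden-unit features can limit the effective complexity of the network without shrinking the weights directly.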
This article has been indexed by CNKI and other databases.
