
Robust multi-layer extreme learning machine using bias-variance tradeoff (Cited by: 1)

Abstract: As a new neural network model, the extreme learning machine (ELM) has a good learning rate and generalization ability. However, an ELM with a single-hidden-layer structure often fails to achieve good results when faced with large-scale, multi-featured problems. To resolve this problem, we propose a multi-layer framework for the ELM learning algorithm to improve the model's generalization ability. Moreover, noise or abnormal points often occur in practical applications, making clean training data unobtainable; the generalization ability of the original ELM decreases under such circumstances. To address this issue, drawing on bias-variance decomposition theory, we add model bias and model variance to the loss function, so that the model gains the ability to minimize both, thus reducing the influence of noise signals. A new robust multi-layer algorithm, called ML-RELM, is proposed to enhance outlier robustness on complex datasets. Simulation results show that the method has high generalization ability and strong robustness to noise.
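To make the baseline concrete, below is a minimal sketch of a single-hidden-layer ELM, the model the abstract says the paper extends. Hidden weights are drawn randomly and fixed, and only the output weights are solved in closed form. This is not the paper's ML-RELM: the multi-layer stacking and the bias-variance terms in the loss are stood in for by a plain ridge penalty, and the names `elm_fit` and `elm_predict` are illustrative, not from the paper.

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, ridge=1e-3, rng=None):
    """Train a single-hidden-layer ELM.

    Hidden-layer weights W and biases b are random and never trained;
    only the output weights beta are computed, via ridge-regularized
    least squares (a stand-in for the paper's bias-variance penalty).
    """
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden activations
    # Solve (H^T H + ridge * I) beta = H^T T for beta
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Usage: regress a noisy sine curve
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
T = np.sin(X) + 0.1 * rng.standard_normal(X.shape)
W, b, beta = elm_fit(X, T, n_hidden=40, rng=1)
pred = elm_predict(X, W, b, beta)
mse = float(np.mean((pred - T) ** 2))
```

Because no gradient descent is involved, training reduces to one linear solve, which is the source of the "good learning rate" the abstract mentions; the paper's contribution is making this closed-form step robust to contaminated data.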
Authors: YU Tian-jun (俞天钧), YAN Xue-feng (颜学峰) (Key Laboratory of Advanced Control and Optimization for Chemical Processes of Ministry of Education, East China University of Science and Technology, Shanghai 200237, China)
Source: Journal of Central South University (中南大学学报(英文版)), indexed in SCIE, EI, CAS, CSCD; 2020, No. 12, pp. 3744-3753 (10 pages)
Funding: Project 21878081 supported by the National Natural Science Foundation of China; Project 222201917006 supported by the Fundamental Research Funds for the Central Universities, China.
Keywords: extreme learning machine; deep neural network; robustness; unsupervised feature learning

