《Journal of Data Acquisition and Processing》 2017-01

Individual Convergence of Stochastic Optimization Methods in Machine Learning

Tao Qing; Ma Po; Zhang Menghan; Tao Wei (11th Department, Army Officer Academy of PLA; College of Command System, The PLA University of Science and Technology)
Stochastic optimization algorithms are among the state-of-the-art methods for solving large-scale machine learning problems, where the central questions are whether the optimal convergence rate is attained and whether the learning structure is preserved. A wide variety of stochastic optimization algorithms have been proposed for regularized loss minimization. However, most of them establish convergence only for the averaged output, for which even the simplest structure, sparsity, cannot be preserved. In contrast, the individual solution retains sparsity well, but proving its optimal convergence rate remains an open problem under active study. On the other hand, the commonly used assumption of unbiased gradients in stochastic optimization often fails in practice. In such cases, a striking fact is that the bias in the convergence bound of accelerated algorithms accumulates with the iterations, rendering the accelerated methods inapplicable. This paper gives an overview of the state of the art and open problems concerning stochastic first-order gradient methods, covering individual convergence rates, biased gradients, and nonconvex problems, and on this basis points out several directions for future research.
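To make the contrast between the averaged output and the individual output concrete, the following is a minimal sketch in standard notation (the symbols w_t, \bar{w}_T, \lambda and \ell are introduced here purely for illustration and are not taken from the paper itself):

    \min_{w \in \mathbb{R}^d} F(w) = \frac{1}{n} \sum_{i=1}^{n} \ell(w; z_i) + \lambda \|w\|_1

A stochastic first-order method generates iterates w_1, \dots, w_T, and convergence guarantees are typically stated for the averaged output

    \bar{w}_T = \frac{1}{T} \sum_{t=1}^{T} w_t ,

but the average of iterates whose supports differ is generally dense, so the sparsity induced by the l1-regularizer is lost. The individual output w_T, by contrast, keeps whatever sparsity the proximal or projection step enforces at the final iteration; establishing its optimal convergence rate is the open problem referred to above.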
【Fund】: Supported by the National Natural Science Foundation of China (61673394, 61273296)
【Category Index】: TP181