
On the Linear Convergence of a Proximal Gradient Method for a Class of Nonsmooth Convex Minimization Problems (Cited by: 4)

Abstract: We consider a class of nonsmooth convex optimization problems in which the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the sum of the l1-norm and the l2-norm of the optimization variables. This class of problems arises naturally from applications in sparse group Lasso, a popular technique for variable selection. An effective approach to solving such problems is the Proximal Gradient Method (PGM). In this paper we prove a local error bound around the optimal solution set for this problem and use it to establish the linear convergence of the PGM without assuming strong convexity of the overall objective function.
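The abstract describes applying the proximal gradient method to a smooth loss composed with a linear map plus an l1 + l2 regularizer. As an illustration only (the paper's exact problem instance and step-size rules are not reproduced here), the sketch below applies PGM to a least-squares loss, which is one strongly convex differentiable choice; the function names and the fixed step size 1/L are assumptions. It uses the known closed-form prox of t1·||·||_1 + t2·||·||_2: soft thresholding followed by block shrinkage.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise proximal operator of t*||.||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l1_l2(v, t1, t2):
    # Proximal operator of t1*||.||_1 + t2*||.||_2:
    # soft-threshold first, then shrink the whole vector toward zero.
    u = soft_threshold(v, t1)
    norm = np.linalg.norm(u)
    if norm <= t2:
        return np.zeros_like(u)
    return (1.0 - t2 / norm) * u

def proximal_gradient(A, b, lam1, lam2, n_iter=500):
    # Minimize 0.5*||A x - b||^2 + lam1*||x||_1 + lam2*||x||_2
    # with a fixed step 1/L, where L = sigma_max(A)^2 is the gradient's
    # Lipschitz constant for the least-squares term.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = prox_l1_l2(x - grad / L, lam1 / L, lam2 / L)
    return x
```

With the step size 1/L the iteration decreases the objective monotonically; the paper's contribution is showing that, via a local error bound, this convergence is in fact linear even though the overall objective is not strongly convex.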
Source: Journal of the Operations Research Society of China (EI-indexed; official English-language journal of the Operations Research Society of China), 2013, No. 2, pp. 163-186 (24 pages).
Funding: This work was partially supported by the National Natural Science Foundation of China (Nos. 61179033, DMS-1015346).