Abstract
Distributed network flow optimization problems are typically solved with the dual gradient descent algorithm, which can be implemented in a distributed manner but converges slowly. The accelerated dual descent (ADD) algorithm improves the convergence rate of dual gradient descent by computing approximate Newton steps in a distributed fashion. However, owing to the inherent uncertainty of communication networks, its convergence cannot be guaranteed when the constraints are uncertain. To address this, a stochastic version of the ADD algorithm is proposed for network optimization under uncertainty. It is proved theoretically that when the mean square error of the uncertainty is bounded, the stochastic ADD algorithm almost surely converges to an error neighborhood of the optimum, and that under a stricter bound on the uncertainty it almost surely converges to the optimum exactly. Numerical results show that the stochastic ADD algorithm converges in roughly two orders of magnitude fewer iterations than the stochastic gradient descent algorithm.
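As a rough illustration of the update the abstract describes, the sketch below runs a stochastic ADD-style iteration on a toy quadratic network flow problem. Everything in it is an assumption made for illustration, not the paper's setup: the network data A and b, the noise model in noisy_grad, the truncation depth N, and the step size; the actual algorithm and its parameters are given in the full text.

import numpy as np

# Toy setting (hypothetical, not the paper's test networks): a four-node
# flow network with node-edge incidence matrix A, quadratic edge costs
# phi_e(x) = 0.5 * x**2, and a supply/demand vector b summing to zero.
A = np.array([[ 1.,  1.,  0.,  0.],
              [-1.,  0.,  1.,  0.],
              [ 0., -1., -1.,  1.],
              [ 0.,  0.,  0., -1.]])
b = np.array([1., 0., 0., -1.])  # one unit enters at node 0, leaves at node 3

def primal(lam):
    # With quadratic edge costs, the Lagrangian minimizer is x(lam) = -A^T lam.
    return -A.T @ lam

def noisy_grad(lam, rng, sigma=0.05):
    # Stochastic dual gradient: the flow-conservation violation plus
    # zero-mean noise standing in for the uncertain constraint information.
    return A @ primal(lam) - b + sigma * rng.standard_normal(len(b))

def add_direction(g, N=2):
    # ADD-N style approximate Newton direction: the dual Hessian here is the
    # graph Laplacian H = A A^T; split H = D - B with D = diag(H) and keep
    # the first N+1 terms of the Neumann series for H^{-1}. Each extra term
    # uses one more hop of neighbor information, which is what keeps the
    # computation distributed.
    H = A @ A.T
    D_inv = np.diag(1.0 / np.diag(H))
    B = np.diag(np.diag(H)) - H
    d = np.zeros_like(g)
    term = D_inv @ g
    for _ in range(N + 1):
        d += term
        term = D_inv @ (B @ term)
    return d

rng = np.random.default_rng(0)
lam = np.zeros(A.shape[0])
for _ in range(50):
    lam += 0.5 * add_direction(noisy_grad(lam, rng))  # stochastic ADD ascent step
print("constraint residual:", np.linalg.norm(A @ primal(lam) - b))

The truncation depth N trades the quality of the Newton direction against how far each node must see into the network: N = 0 reduces to a diagonally scaled gradient step, while larger N approaches the exact (centralized) Newton step.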
Source
Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) (《重庆邮电大学学报(自然科学版)》)
Indexed in CSCD and the Peking University Core Journal list (北大核心)
2014, No. 6, pp. 838-844 (7 pages)