Ergodic convergence in subgradient optimization
Authors:
T. Larsson,
M. Patriksson,
A.B. Strömberg,
Journal:
Optimization Methods and Software
(Available online 1998)
Volume/Issue:
Volume 9,
Issue 1-3
Pages: 93-120
ISSN: 1055-6788
Year: 1998
DOI: 10.1080/10556789808805688
Publisher: Gordon and Breach Science Publishers
Keywords: Nonsmooth Minimization; Conditional Subgradient Optimization; Ergodic Convergence; Finite Identification Results; Lagrange Multipliers; Bounding Procedure
Data source: Taylor
Abstract:
When nonsmooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients which verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of a subgradient method in terms of the approximate fulfillment of optimality conditions. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers and convergent lower bounds on the optimal objective value, is not directly available in subgradient schemes.
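The phenomenon described in the abstract can be illustrated on a toy problem of our own choosing (not taken from the paper): minimizing f(x) = |x| by a subgradient method with divergent-series step lengths. The individual subgradients sign(x_k) never accumulate to 0, the subgradient that verifies optimality at x* = 0, yet their ergodic (step-length weighted) average does tend to 0. A minimal sketch:

```python
def f(x):
    # nonsmooth convex test function with minimizer x* = 0
    return abs(x)

def subgrad(x):
    # one subgradient of |x|; at x = 0 any value in [-1, 1] would do
    return 1.0 if x >= 0 else -1.0

x = 0.7                      # arbitrary starting point
g_weighted_sum = 0.0
step_sum = 0.0
for k in range(1, 10001):
    step = 1.0 / k           # classical divergent-series step lengths
    g = subgrad(x)
    g_weighted_sum += step * g
    step_sum += step
    x -= step * g

# Every individual subgradient has |g| = 1 and so cannot certify
# optimality, but the ergodic average is driven toward 0:
g_ergodic = g_weighted_sum / step_sum
```

Here each g is +1 or -1 even though the iterates x_k converge to 0, while g_ergodic is small, consistent with the ergodic convergence results the paper develops in the more general setting of conditional subgradient schemes and Lagrangian duals.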