
SD-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems

Posted: April 21, 2026, 14:14

Title: SD-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems

Speaker: Prof. Cai Xingju (Nanjing Normal University)

Time: Friday, April 24, 2026, 10:30–11:30

Venue: Room 114 (small lecture hall), 6776永利集团

Campus contact: Prof. Zhang Liwei         Tel: 84708351-8320


Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes or strong convexity assumptions, or entail substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques to develop a novel stochastic splitting algorithm, termed the stochastic distributed regularized splitting method (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/k)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to a 2--3$\times$ speedup over state-of-the-art baselines while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
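To make the abstract's description concrete, the following is a minimal illustrative sketch, not the paper's exact S-D-RSM algorithm: each agent keeps a local copy of the decision variable, and at every iteration only a random subset of agents takes a proximal-gradient step that is pulled toward the current average iterate by a consensus regularization term. The function names (`prox_l1`, `sdrsm_sketch`), the soft-thresholding nonsmooth term, and all parameter values are assumptions introduced for illustration.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal mapping of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sdrsm_sketch(grads, n_agents, dim, steps=200, step=0.1,
                 rho=1.0, l1_weight=0.01, batch=None, seed=0):
    """Illustrative consensus proximal-gradient sketch (NOT the paper's
    exact method). grads[i](x) returns the gradient of agent i's smooth
    local loss; a small l1 term plays the role of the nonsmooth part."""
    rng = np.random.default_rng(seed)
    X = np.zeros((n_agents, dim))          # local copies x_i
    batch = batch or max(1, n_agents // 2)
    for _ in range(steps):
        xbar = X.mean(axis=0)              # consensus proxy (average iterate)
        S = rng.choice(n_agents, size=batch, replace=False)
        for i in S:                        # only a random subset updates
            # Gradient step on the local loss, regularized toward consensus.
            v = X[i] - step * (grads[i](X[i]) + rho * (X[i] - xbar))
            # Proximal step on the nonsmooth term.
            X[i] = prox_l1(v, step * l1_weight)
    return X.mean(axis=0)
```

For quadratic local losses $f_i(x) = \tfrac{1}{2}\|x - a_i\|^2$ with a negligible l1 weight, the averaged iterate settles near the mean of the $a_i$, which is the consensus minimizer; the fixed step size mirrors the abstract's point that no diminishing step-size schedule is needed in this regime.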


Speaker bio: Cai Xingju is a professor and doctoral supervisor at Nanjing Normal University. Prof. Cai's research focuses on optimization theory and algorithms, variational inequalities, and numerical optimization. Prof. Cai has led several national research grants and received a First Prize of the Jiangsu Provincial Science and Technology Progress Award and a Second Prize of the Ministry of Education Award for Outstanding Scientific Research Achievements. Prof. Cai serves as a director and deputy secretary-general of the Operations Research Society of China, standing director and secretary-general of its Algorithms, Software and Applications branch, standing director of its Mathematical Programming branch, and president of the Jiangsu Operations Research Society.


Postal code: 116024

Tel: 0411-84708354

Address: No. 2 Linggong Road, Ganjingzi District, Dalian
