Journal of the Operations Research Society of China ›› 2024, Vol. 12 ›› Issue (3): 549-571. doi: 10.1007/s40305-023-00492-2


An Accelerated Stochastic Mirror Descent Method

Bo-Ou Jiang1,2, Ya-Xiang Yuan1   

  1. LSEC, ICMSEC, AMSS, Chinese Academy of Sciences, Beijing 100190, China;
  2. Department of Mathematics, University of Chinese Academy of Sciences, Beijing 100049, China
  • Received: 2023-03-03  Revised: 2023-04-05  Online: 2024-09-30  Published: 2024-08-15
  • Contact: Bo-Ou Jiang, Ya-Xiang Yuan  E-mail: jiangboou@lsec.cc.ac.cn; yyx@lsec.cc.ac.cn
  • Supported by:
    This work was partially supported by the National Natural Science Foundation of China (No. 1228201).

Abstract: Driven by large-scale optimization problems arising from machine learning, the development of stochastic optimization methods has grown rapidly. Numerous methods have been built on the vanilla stochastic gradient descent (SGD) method. However, for most of these algorithms, the convergence rate in the stochastic setting does not simply match that in the deterministic setting. The main goal of this paper is to better understand this gap between deterministic and stochastic optimization. Specifically, we are interested in Nesterov acceleration of gradient-based approaches. In our study, we focus on the acceleration of the stochastic mirror descent method, which enjoys an implicit regularization property. Assuming that the objective is smooth and either convex or strongly convex, our analysis prescribes the method parameters that ensure fast convergence of the estimation error and satisfactory numerical performance.
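
To make the setting concrete, the sketch below illustrates one generic way a Nesterov-type extrapolation can be combined with a stochastic mirror descent step, here using the entropic mirror map (exponentiated gradient) on the probability simplex, in the spirit of Lan's AC-SA scheme. The objective, data, minibatch size, step sizes, and momentum schedule are hypothetical placeholders chosen for illustration only; this is a minimal sketch, not the specific algorithm or parameter choice analyzed in the paper.

```python
# Illustrative sketch (not the paper's algorithm): a generic Nesterov-accelerated
# stochastic mirror descent with the negative-entropy mirror map on the simplex.
# All problem data and schedules below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10                                # samples and dimension (hypothetical)
A = rng.normal(size=(n, d))
b = A @ (np.ones(d) / d) + 0.01 * rng.normal(size=n)

def stochastic_grad(x, batch_size=8):
    """Minibatch gradient of f(x) = (1/2n) * ||A x - b||^2."""
    idx = rng.choice(n, size=batch_size, replace=False)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch_size

def mirror_step(x, g, step):
    """Entropic mirror (exponentiated-gradient) step on the probability simplex."""
    z = x * np.exp(-step * g)
    return z / z.sum()

x = x_ag = np.ones(d) / d                     # current iterate and aggregated iterate
K = 500
for k in range(1, K + 1):
    alpha = 2.0 / (k + 1)                     # momentum weight (a common schedule)
    step = 0.5 * k / K                        # increasing step size (hypothetical choice)
    y = (1 - alpha) * x_ag + alpha * x        # extrapolated query point
    g = stochastic_grad(y)                    # stochastic gradient at the query point
    x = mirror_step(x, g, step)               # mirror-descent update
    x_ag = (1 - alpha) * x_ag + alpha * x     # Nesterov-style aggregation

print("final objective:", 0.5 * np.mean((A @ x_ag - b) ** 2))
```

The extrapolated query point and the aggregated iterate are what distinguish the accelerated scheme from plain stochastic mirror descent, which would evaluate the gradient directly at the current iterate.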

Key words: Large-scale optimization, Variance reduction, Mirror descent, Acceleration, Independent sampling, Importance sampling
