Journal of the Operations Research Society of China ›› 2025, Vol. 13 ›› Issue (4): 873-899. DOI: 10.1007/s40305-024-00542-3


Analyze Accelerated Mirror Descent via High-Resolution ODEs

Ya-Xiang Yuan1, Yi Zhang1,2   

  1. Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China;
    2. University of Chinese Academy of Sciences, Beijing 100049, China
  • Received: 2023-08-10 Revised: 2024-03-10 Online: 2025-12-30 Published: 2025-12-19
  • Contact: Yi Zhang E-mail: zhangyi2020@lsec.cc.ac.cn
  • Supported by:
    This work was partially supported by the National Natural Science Foundation of China (No. 12288201).

Abstract: Mirror descent, which can be viewed as a generalization of gradient descent for solving constrained optimization problems, has found a variety of applications in many fields. With the growing demand for solving high-dimensional constrained optimization problems, an accelerated form of mirror descent has been proposed, and its corresponding low-resolution ordinary differential equation (ODE) framework has been studied. However, the low-resolution ODEs are unable to distinguish between Polyak’s heavy-ball method and Nesterov’s accelerated gradient method, and the same problem arises with the low-resolution ODEs for accelerated mirror descent. To address this issue, we derive the high-resolution ODEs for accelerated mirror descent and propose a general Lyapunov function framework to analyze its convergence rates in both continuous time and discrete time. Furthermore, we demonstrate via the high-resolution ODE framework that accelerated mirror descent minimizes the squared gradient norm at an inverse cubic rate. Finally, we extend the high-resolution ODE framework for accelerated mirror descent to the accelerated higher-order mirror descent method and obtain finer convergence results.
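To make the mirror descent step concrete, the following is a minimal sketch of (non-accelerated) mirror descent on the probability simplex with the entropic mirror map, i.e. the exponentiated-gradient update. It is illustrative only and is not the accelerated or high-resolution scheme analyzed in the paper; the objective, step size eta, and iteration count are assumptions chosen for the example.

    # Minimal sketch: mirror descent on the probability simplex with the
    # entropic mirror map (exponentiated gradient). Illustrative only;
    # eta, iters, and the quadratic objective are assumed, not from the paper.
    import numpy as np

    def mirror_descent_simplex(grad, x0, eta=0.1, iters=200):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = x * np.exp(-eta * grad(x))   # mirror step in the dual: exponentiated gradient
            x /= x.sum()                     # Bregman projection back onto the simplex
        return x

    # Example: minimize f(x) = ||x - c||^2 over the simplex (c is a made-up target).
    c = np.array([0.7, 0.2, 0.1])
    x_star = mirror_descent_simplex(lambda x: 2 * (x - c), np.ones(3) / 3)

The multiplicative update followed by normalization is exactly the Bregman-proximal step induced by the negative-entropy mirror map, which keeps iterates feasible without an explicit Euclidean projection.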

Key words: Mirror descent, Ordinary differential equations, Lyapunov function, Gradient minimization
