Journal of the Operations Research Society of China ›› 2025, Vol. 13 ›› Issue (1): 313-325. doi: 10.1007/s40305-023-00468-2

A Note on R-Linear Convergence of Nonmonotone Gradient Methods

Xin-Rui Li, Ya-Kui Huang   

  1. Institute of Mathematics, Hebei University of Technology, Tianjin 300401, China
  • Received: 2022-02-28  Revised: 2022-09-17  Online: 2025-03-30  Published: 2025-03-20
  • Contact: Ya-Kui Huang, Xin-Rui Li  E-mail: hyk@hebut.edu.cn; xinruili1020@163.com

Abstract: Nonmonotone gradient methods generally perform better than their monotone counterparts, especially on unconstrained quadratic optimization. However, the known convergence rate of a monotone method is often much better than that of its nonmonotone variant. With the aim of shrinking this gap between the theory and practice of nonmonotone gradient methods, we introduce a property for the convergence analysis of a large collection of gradient methods. We prove that any gradient method whose stepsizes satisfy this property converges R-linearly at a rate of 1 - λ1/M1, where λ1 is the smallest eigenvalue of the Hessian matrix and M1 is an upper bound on the inverse stepsizes. Our results indicate that the existing convergence rates of many nonmonotone methods can be improved to 1 - 1/κ, where κ is the associated condition number.
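As a hedged illustration of the setting described in the abstract (not the paper's actual analysis), the sketch below runs the Barzilai–Borwein (BB) method, a classic nonmonotone gradient method, on a strictly convex quadratic. For such methods the inverse stepsizes lie in the spectrum of the Hessian, so an upper bound M1 on them exists and the abstract's rate 1 - λ1/M1 applies; the problem sizes and tolerances here are arbitrary choices for demonstration.

```python
import numpy as np

# Build a strictly convex quadratic f(x) = 0.5 x^T A x - b^T x with a
# known spectrum: lambda_1 = 1, lambda_n = 100, so kappa = 100.
np.random.seed(0)
n = 50
eigs = np.linspace(1.0, 100.0, n)
Q, _ = np.linalg.qr(np.random.randn(n, n))
A = Q @ np.diag(eigs) @ Q.T
b = np.random.randn(n)
x_star = np.linalg.solve(A, b)            # exact minimizer, for checking the error

# BB (nonmonotone) gradient iteration: x_{k+1} = x_k - alpha_k * g_k.
x = np.zeros(n)
g = A @ x - b
alpha = 1.0 / eigs[-1]                    # safe first stepsize: 1 / lambda_n
for k in range(500):
    x_new = x - alpha * g
    g_new = A @ x_new - b
    s, y = x_new - x, g_new - g
    # BB1 stepsize: its inverse s^T A s / s^T s lies in [lambda_1, lambda_n],
    # so the inverse stepsizes are bounded above by M1 = lambda_n here.
    alpha = (s @ s) / (s @ y)
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-10:
        break

print("final error:", np.linalg.norm(x - x_star))
```

The objective values along this iteration are typically nonmonotone, yet the error decays R-linearly overall, which is the behavior the paper's rate bound quantifies.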

Key words: Gradient methods, R-linear convergence, Nonmonotone, Quadratic optimization
