Mirror Frameworks for Relatively Lipschitz and Monotone-Like Variational Inequalities

  • 1 Department of Mathematics, National University of Defense Technology, Changsha 410073, Hunan, China
  • 2 School of Mathematical Sciences, Chinese Academy of Sciences, Beijing 100049, China

Received date: 2022-03-22

Online published: 2025-03-20

Abstract

Nonconvex-nonconcave saddle-point optimization in machine learning has triggered extensive research on non-monotone variational inequalities (VIs). In this work, we introduce two mirror frameworks, called the mirror extragradient method and the mirror extrapolation method, for approximating solutions to relatively Lipschitz and monotone-like VIs. The former covers the well-known mirror prox method of Nemirovski, the dual extrapolation method of Nesterov, and the recently proposed Bregman extragradient method; all of them can be reformulated into a scheme very similar to the original form of the extragradient method. The latter includes the operator extrapolation method and the Bregman extrapolation method as special cases. The proposed mirror frameworks allow us to present a unified and improved convergence analysis for all of these existing methods under relative Lipschitzness and monotone-like conditions, which may currently be the weakest assumptions guaranteeing (sub)linear convergence.
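
As a reading aid (a sketch in standard notation, not taken verbatim from the paper), the two frameworks can be illustrated as follows. For a VI with operator $F$ over a closed convex set $Z$, the classical extragradient method of Korpelevich [3] iterates

$z_{k+1/2} = \Pi_Z\big(z_k - \gamma F(z_k)\big), \qquad z_{k+1} = \Pi_Z\big(z_k - \gamma F(z_{k+1/2})\big),$

and the mirror extragradient family replaces the Euclidean projection $\Pi_Z$ by Bregman proximal steps generated by a distance-generating function $h$, as in Nemirovski's mirror prox [4]:

$z_{k+1/2} = \arg\min_{z\in Z}\big\{\gamma\langle F(z_k), z\rangle + D_h(z, z_k)\big\}, \qquad z_{k+1} = \arg\min_{z\in Z}\big\{\gamma\langle F(z_{k+1/2}), z\rangle + D_h(z, z_k)\big\},$

where $D_h(z, z') = h(z) - h(z') - \langle\nabla h(z'), z - z'\rangle$ is the Bregman distance. The mirror extrapolation family instead makes a single operator evaluation per iteration by extrapolating past operator values, as in the operator extrapolation method [16]:

$z_{k+1} = \arg\min_{z\in Z}\big\{\gamma\langle F(z_k) + \lambda\,(F(z_k) - F(z_{k-1})), z\rangle + D_h(z, z_k)\big\}.$

The step size $\gamma$ and extrapolation weight $\lambda$ shown here are illustrative; the paper chooses them under relative Lipschitzness rather than the classical Lipschitz condition.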

Cite this article

Hui Zhang, Yu-Hong Dai. Mirror Frameworks for Relatively Lipschitz and Monotone-Like Variational Inequalities [J]. Journal of the Operations Research Society of China, 2025, 13(1): 83-113. DOI: 10.1007/s40305-023-00458-4

References

[1] Facchinei, F., Pang, J.S.: Finite-dimensional variational inequalities and complementarity problems. Springer, Heidelberg (2003)
[2] Gidel, G., Berard, H., Vignoud, G., Vincent, P., Lacoste-Julien, S.: A variational inequality perspective on generative adversarial networks. International Conference on Learning Representations (2018). https://doi.org/10.48550/arXiv.1802.10551
[3] Korpelevich, G.M.: An extragradient method for finding saddle points and for other problems. Matecon 12, 747-756 (1976)
[4] Nemirovski, A.S.: Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim. 15, 229-251 (2004)
[5] Nesterov, Y.: Dual extrapolation and its applications to solving variational inequalities and related problems. Math. Program. 109, 319-344 (2007)
[6] Juditsky, A., Kwon, J., Moulines, E.: Unifying mirror descent and dual averaging. Math. Program., 1-38 (2022)
[7] Popov, L.D.: A modification of the Arrow-Hurwicz method for search of saddle points. Math. Notes 28, 845-848 (1980)
[8] Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431-446 (2000)
[9] Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318-335 (2011)
[10] Malitsky, Y.V.: Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 25, 502-520 (2015)
[11] Malitsky, Y.V., Tam, M.K.: A forward-backward splitting method for monotone inclusions without cocoercivity. SIAM J. Optim. 30, 1451-1472 (2020)
[12] Dang, C.D., Lan, G.H.: On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators. Comput. Optim. Appl. 60, 277-310 (2015)
[13] Zhou, Z., Mertikopoulos, P., Bambos, N., Boyd, S.P., Glynn, P.W.: Stochastic mirror descent in variationally coherent optimization problems. International Conference on Neural Information Processing Systems, pp. 7043-7052 (2017)
[14] Liu, M., Rafique, H., Lin, Q., Yang, T.: First-order convergence theory for weakly-convex-weakly-concave min-max problems. J. Mach. Learn. Res. 22, 1-34 (2021)
[15] Diakonikolas, J., Daskalakis, C., Jordan, M.I.: Efficient methods for structured nonconvex-nonconcave min-max optimization. arXiv:2011.00364 (2020)
[16] Kotsalis, G., Lan, G.H., Tian, L.: Simple and optimal methods for stochastic variational inequalities, I: operator extrapolation. SIAM J. Optim. 32, 2041-2073 (2022)
[17] Lee, S., Kim, D.: Fast extra gradient methods for smooth structured nonconvex-nonconcave minimax problems. arXiv:2106.02326 (2021)
[18] Grimmer, B., Lu, H., Worah, P., Mirrokni, V.: The landscape of the proximal point method for nonconvex-nonconcave minimax optimization. arXiv:2006.08667 (2020)
[19] Bauschke, H.H., Bolte, J., Teboulle, M.: A descent lemma beyond Lipschitz gradient continuity: first-order methods revisited and applications. Math. Oper. Res. 42, 330-348 (2016)
[20] Lu, H., Freund, R.M., Nesterov, Y.: Relatively smooth convex optimization by first-order methods, and applications. SIAM J. Optim. 28, 333-354 (2018)
[21] Cohen, M.B., Sidford, A., Tian, K.: Relative Lipschitzness in extragradient methods and a direct recipe for acceleration. arXiv:2011.06572 (2020)
[22] Zhang, H.: Extragradient and extrapolation methods with generalized Bregman distances for saddle point problems. Oper. Res. Lett. 50, 329-334 (2022)
[23] Hiriart-Urruty, J.-B., Lemaréchal, C.: Fundamentals of convex analysis. Springer (2004)
[24] Rockafellar, R.T.: Convex analysis. Princeton University Press (2015)
[25] Bregman, L.M.: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200-217 (1967)
[26] Bauschke, H.H., Borwein, J.M.: Legendre functions and the method of random Bregman projections. J. Convex Anal. 4, 27-67 (1997)
[27] Kiwiel, K.C.: Free-steering relaxation methods for problems with strictly convex costs and linear constraints. Math. Oper. Res. 22, 326-349 (1997)
[28] Reem, D., Reich, S., De Pierro, A.R.: Re-examination of Bregman functions and new properties of their divergences. Optimization 68, 279-348 (2019)
[29] Kiwiel, K.C.: Proximal minimization methods with generalized Bregman functions. SIAM J. Control Optim. 35, 1142-1168 (1997)
[30] Beck, A.: First-order methods in optimization. Soc. Ind. Appl. Math. (2017)
[31] Sherman, J.: Area-convexity, $\ell_\infty$ regularization, and undirected multicommodity flow. Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pp. 452-460 (2017)
[32] Bauschke, H.H., Moursi, W.M., Wang, X.: Generalized monotone operators and their averaged resolvents. Math. Program. 189, 55-74 (2021)
[33] Zhang, H., Yin, W.: Gradient methods for convex minimization: better rates under weaker conditions. CAM Report 13-17, UCLA (2013)
[34] Necoara, I., Nesterov, Y.E., Glineur, F.: Linear convergence of first order methods for non-strongly convex optimization. Math. Program. 175, 69-107 (2019)
[35] Lai, M.-J., Yin, W.: Augmented $\ell_1$ and nuclear-norm models with a globally linearly convergent algorithm. SIAM J. Imaging Sci. 6, 1059-1091 (2013)
[36] Zhang, H.: The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth. Optim. Lett. 11, 817-833 (2017)
[37] Nemirovsky, A.S., Yudin, D.B.: Problem complexity and method efficiency in optimization. Wiley, Chichester (1983)
[38] Beck, A., Teboulle, M.: Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett. 31, 167-175 (2003)
[39] Bubeck, S.: Convex optimization: algorithms and complexity. Found. Trends Mach. Learn. 8, 231-357 (2014)
[40] Maddison, C.J., Paulin, D., Teh, Y.W., Doucet, A.: Dual space preconditioning for gradient descent. SIAM J. Optim. 31, 991-1016 (2021)
[41] Hsieh, Y.-G., Iutzeler, F., Malick, J., Mertikopoulos, P.: On the convergence of single-call stochastic extra-gradient methods. Adv. Neural Inf. Process. Syst. (2019)
[42] Wei, C.-Y., Lee, C.-W., Zhang, M., Luo, H.: Linear last-iterate convergence in constrained saddle-point optimization. The Ninth International Conference on Learning Representations (2021). https://doi.org/10.48550/arXiv.2006.09517
[43] Cen, S., Wei, Y., Chi, Y.J.: Fast policy extragradient methods for competitive games with entropy regularization. arXiv:2105.15186v1 (2021)
[44] Azizian, W., Iutzeler, F., Malick, J., Mertikopoulos, P.: The last-iterate convergence rate of optimistic mirror descent in stochastic variational inequalities. The 34th Annual Conference on Learning Theory (2021)
[45] Mertikopoulos, P., Sandholm, W.H.: Learning in games via reinforcement and regularization. Math. Oper. Res. 41, 1297-1324 (2016)