Journal of the Operations Research Society of China


PPA-Like Contraction Methods for Convex Optimization: A Framework Using Variational Inequality Approach


Online: 2015-12-30    Published: 2015-12-30

Abstract:

Linearly constrained convex optimization has many applications. The first-order optimality condition of a linearly constrained convex optimization problem is a monotone variational inequality (VI). For solving VIs, the proximal point algorithm (PPA) in the Euclidean norm is classical but abstract; hence, the classical PPA plays an important theoretical role but is rarely used in practical scientific computation. In this paper, we review the recently developed customized PPA in the H-norm (where H is a positive definite matrix). Within the customized PPA framework, it is easy to construct contraction-type methods for convex optimization with different linear constraints. In each iteration of the proposed methods, we only need to solve proximal subproblems that have closed-form solutions or can be solved efficiently to high precision. Some novel applications and numerical experiments are reported. Additionally, the original primal-dual hybrid gradient method is modified into a convergent algorithm by means of a unified prediction-correction framework. Using the variational inequality approach, the proofs of contractive convergence and of the convergence rate of the framework are more general and quite simple.
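To make the VI reformulation mentioned in the abstract concrete, here is a standard sketch for the generic model min { θ(x) : Ax = b, x ∈ X } with Lagrange multiplier λ; the notation θ, A, b, X, λ is illustrative and not taken verbatim from the paper. Attaching the multiplier to the linear constraint, the first-order optimality condition can be written as the mixed variational inequality

    % Optimality of  min { theta(x) : Ax = b, x in X }  as a monotone VI
    \[
      u^{*} \in \Omega, \qquad
      \theta(x) - \theta(x^{*}) + (u - u^{*})^{\top} F(u^{*}) \ge 0
      \quad \forall\, u \in \Omega,
    \]
    \[
      u = \begin{pmatrix} x \\ \lambda \end{pmatrix}, \qquad
      F(u) = \begin{pmatrix} -A^{\top}\lambda \\ Ax - b \end{pmatrix}, \qquad
      \Omega = \mathcal{X} \times \mathbb{R}^{m}.
    \]

Monotonicity of F follows because (u - v)ᵀ(F(u) - F(v)) = 0 for this affine skew-symmetric operator.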

Key words: Linearly constrained convex optimization · Variational inequality · Proximal point algorithm
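As a concrete illustration of a customized-PPA-style primal-dual iteration with a closed-form proximal subproblem, the following minimal Python sketch solves min ½‖x − c‖² subject to Ax = b. The problem data, the step sizes r and s, and the iteration count are assumptions for demonstration and are not taken from the paper's experiments.

    import numpy as np

    # Minimal, illustrative sketch of a customized-PPA-style iteration for
    #   min 0.5*||x - c||^2   subject to   A x = b.
    # All data and parameters below are assumptions for demonstration only.

    rng = np.random.default_rng(0)
    m, n = 5, 20
    A = rng.standard_normal((m, n))
    b = A @ rng.standard_normal(n)      # feasible right-hand side
    c = rng.standard_normal(n)

    # Proximal parameters; a usual contraction condition is r*s > ||A^T A||.
    spec = np.linalg.norm(A, 2) ** 2    # largest eigenvalue of A^T A
    r = 1.5 * np.sqrt(spec)
    s = 1.5 * np.sqrt(spec)

    x = np.zeros(n)
    lam = np.zeros(m)
    for _ in range(500):
        # Primal subproblem has a closed-form minimizer of
        #   0.5*||x - c||^2 - lam^T (A x - b) + (r/2)*||x - x_k||^2
        x_new = (c + A.T @ lam + r * x) / (1.0 + r)
        # Dual update at the extrapolated point 2*x_new - x
        lam = lam - (A @ (2.0 * x_new - x) - b) / s
        x = x_new

    print("constraint violation:", np.linalg.norm(A @ x - b))

Evaluating the dual step at the extrapolated point 2x^{k+1} − x^{k} is what allows the scheme to be interpreted as a PPA in an H-norm with H positive definite when rs > ‖AᵀA‖, which is the viewpoint behind the contraction and convergence-rate analysis described in the abstract.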