Continuous Optimization
In this paper, we present a new adaptive trust-region method for solving nonlinear unconstrained optimization problems. More precisely, the trust-region radius is based on a nonmonotone technique and uses an adaptively chosen approximation of the Hessian. We produce a suitable trust-region radius, preserve global convergence to first-order critical points under classical assumptions, and improve the practical performance of the new algorithm compared with other existing variants. Moreover, the quadratic convergence rate is established under suitable conditions. Computational results on the CUTEst test collection of unconstrained problems are presented to show the effectiveness of the proposed algorithm compared with some existing methods.
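As background for the abstract above, the following is a minimal sketch of a classical trust-region iteration with the standard acceptance ratio and radius update, applied to a convex quadratic and using the Cauchy point as an approximate subproblem solution. It does not reproduce the paper's adaptive nonmonotone radius rule; all names and data are illustrative.

```python
import numpy as np

# Classical trust-region loop on f(x) = 0.5 x^T H x - b^T x,
# using the Cauchy point as an approximate subproblem solution.
H = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([2.0, 4.0])

def f(x):
    return 0.5 * x @ H @ x - b @ x

def cauchy_step(g, delta):
    """Minimizer of the quadratic model along -g within the ball of radius delta."""
    gHg = g @ H @ g
    tau = 1.0 if gHg <= 0 else min(np.linalg.norm(g) ** 3 / (delta * gHg), 1.0)
    return -tau * (delta / np.linalg.norm(g)) * g

x, delta = np.zeros(2), 1.0
for _ in range(100):
    g = H @ x - b
    if np.linalg.norm(g) < 1e-10:
        break
    s = cauchy_step(g, delta)
    pred = -(g @ s + 0.5 * s @ H @ s)   # predicted model decrease
    ared = f(x) - f(x + s)              # actual decrease
    rho = ared / pred                   # acceptance ratio
    if rho > 0.25:                      # accept the step
        x = x + s
    # standard radius update: shrink on poor agreement, expand on good
    delta = 0.5 * delta if rho < 0.25 else (2.0 * delta if rho > 0.75 else delta)
# x approaches the minimizer H^{-1} b = (1, 1)
```

Because the model here is the exact quadratic, the ratio rho equals 1 and every step is accepted; adaptive methods of the kind described above replace the fixed shrink/expand constants with a radius driven by the (nonmonotone) history of function values.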
We propose a two-phase SQP (Sequential Quadratic Programming) algorithm for equality-constrained optimization problems. In this paper, an iteration process is developed, and at each iteration, two quadratic sub-problems are solved. It is proved that, under suitable assumptions and without computing further higher-order derivatives, this iteration process achieves a higher-order local convergence rate than the Newton-SQP scheme. A theoretical advantage and a note on the l1 merit function associated with the method are provided.
In this paper, we propose a modified proximal gradient method for solving a class of nonsmooth convex optimization problems, which arise in many contemporary statistical and signal processing applications. The proposed method adopts a new scheme to construct the descent direction based on the proximal gradient method. It is proven that the modified proximal gradient method is Q-linearly convergent without assuming strong convexity of the objective function. Finally, numerical experiments are conducted to evaluate the proposed method.
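For context, a minimal sketch of the classical proximal gradient method (ISTA) on which such modified schemes build, applied to l1-regularized least squares; the paper's modified descent direction is not reproduced, and all names and data here are illustrative.

```python
import numpy as np

# Classical proximal gradient (ISTA) for
#   min_x 0.5*||Ax - b||^2 + lam*||x||_1
def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# With A = I the iteration reaches the exact solution in one step:
A = np.eye(2)
x = ista(A, np.array([3.0, 0.5]), lam=1.0)
# x == soft_threshold(b, 1) == [2, 0]
```

The Q-linear rate claimed in the abstract is notable precisely because the plain scheme above is only guaranteed an O(1/k) sublinear rate without strong convexity.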
Linear programming is the core problem of various operational research problems. The dominant approaches for linear programming are simplex and interior point methods. In this paper, we show that the alternating direction method of multipliers (ADMM), which was proposed long ago but has recently found more and more applications in a broad spectrum of areas, can also be easily used to solve the canonical linear programming model. The resulting per-iteration complexity is O(mn), where m is the number of constraints and n the variable dimension. At each iteration, there are m subproblems that are eligible for parallel computation, each requiring only O(n) flops, and there are no inner iterations. We thus introduce the new ADMM approach to linear programming, which may inspire deeper research for more complicated scenarios with more sophisticated results.
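To illustrate the general idea, here is a standard textbook ADMM splitting for the canonical LP; it is not the paper's row-wise decomposition with m parallel O(n) subproblems, and the data below are illustrative.

```python
import numpy as np

# Generic ADMM for the canonical LP
#   min c^T x  s.t.  Ax = b, x >= 0,
# via the splitting x = z with f(x) = c^T x + indicator{Ax = b}
# and g(z) = indicator{z >= 0}.
def admm_lp(c, A, b, rho=1.0, iters=2000):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # x-update: min c^T x + (rho/2)||x - (z - u)||^2  s.t.  Ax = b,
        # solved in closed form via its KKT conditions
        v = z - u
        lam = np.linalg.solve(A @ A.T, b - A @ (v - c / rho))
        x = v - c / rho + A.T @ lam
        z = np.maximum(x + u, 0.0)   # z-update: projection onto x >= 0
        u = u + x - z                # scaled dual update
    return z

# min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  ->  optimum at (1, 0)
z = admm_lp(np.array([1.0, 2.0]), np.array([[1.0, 1.0]]), np.array([1.0]))
```

Each iteration of this generic variant costs a solve with the m-by-m matrix AA^T; the abstract's O(mn) per-iteration bound with m parallel O(n) subproblems suggests a finer decomposition than the one shown here.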
In this paper, we consider a block-structured convex optimization model in which the block variables are nonseparable in the objective and are further linearly coupled in the constraint. For the 2-block case, we propose a number of first-order algorithms to solve this model. First, the alternating direction method of multipliers (ADMM) is extended, assuming that it is easy to optimize the augmented Lagrangian function with respect to one block of variables at a time while fixing the other block. We prove that an O(1/t) iteration complexity bound holds under suitable conditions, where t is the number of iterations. If the subroutines of the ADMM cannot be implemented, then we propose new alternative algorithms, called the alternating proximal gradient method of multipliers, the alternating gradient projection method of multipliers, and hybrids thereof. Under suitable conditions, the O(1/t) iteration complexity bound is shown to hold for all the newly proposed algorithms. Finally, we extend the analysis of the ADMM to the general multi-block case.
The logarithmic quadratic proximal (LQP) regularization is a popular and powerful proximal regularization technique for solving monotone variational inequalities with nonnegative constraints. In this paper, we propose an implementable two-step method for solving structured variational inequality problems by combining LQP regularization and a projection method. The proposed algorithm consists of two steps. The first step generates a pair of predictors via inexactly solving a system of nonlinear equations. The second step then updates the iterate via a simple correction step. We establish the global convergence of the new method under mild assumptions. To improve the numerical performance of our new method, we further present a self-adaptive version and implement it to solve a traffic equilibrium problem. The numerical results further demonstrate the efficiency of the proposed method.
The dual face algorithm, based on Cholesky factorization, was previously described for solving standard linear programming (LP) problems; however, it is not very suitable for sparse computations. To overcome this drawback, this paper presents a variant using Gauss-Jordan elimination for solving bounded-variable LP problems.
The objective of this paper is to propose an exact l1 penalty method for constrained interval-valued programming problems, which transforms the constrained problem into an unconstrained interval-valued penalized optimization problem. Under suitable conditions, we establish the equivalence between optimal solutions of the interval-valued primal problem and the penalized optimization problem. Moreover, saddle-point-type optimality conditions are established in order to relate an optimal solution of the penalized optimization problem to a saddle point of the Lagrangian function. Numerical examples are given to illustrate the derived results.
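The hallmark of an exact penalty is that the constrained minimizer is recovered for a finite penalty parameter, with no need to drive it to infinity. The following sketch illustrates this on a simple real-valued problem (the interval-valued setting of the paper is not reproduced); all data are illustrative.

```python
import numpy as np

# Exact l1 penalty on:  min x^2  s.t.  x >= 1
# (optimum x* = 1 with Lagrange multiplier 2).
# Penalized problem: min x^2 + rho*max(0, 1 - x).
# Any finite rho larger than the multiplier recovers x* exactly.
def penalized(x, rho):
    return x ** 2 + rho * np.maximum(0.0, 1.0 - x)

grid = np.linspace(0.0, 2.0, 2001)                    # crude global grid search
x_exact = grid[np.argmin(penalized(grid, rho=3.0))]   # rho = 3 > 2: exact
x_small = grid[np.argmin(penalized(grid, rho=1.0))]   # rho = 1 < 2: biased
# x_exact is the true constrained minimizer 1.0; x_small stops short at 0.5
```

The nonsmooth max(0, .) term is what makes exactness possible; a smooth quadratic penalty would only approach the constrained minimizer as rho grows without bound.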
The purpose of this paper is to study approximate optimality conditions for composite convex optimization problems with a cone-convex system in locally convex spaces, where the functions involved are not necessarily lower semicontinuous. By using the properties of the epigraphs of conjugate functions, we introduce a new regularity condition and give its equivalent characterizations. Under this new regularity condition, we derive necessary and sufficient optimality conditions for ε-optimal solutions of the composite convex optimization problem. As applications of our results, we derive approximate optimality conditions for cone-convex optimization problems. Our results extend or cover many known results in the literature.