Continuous Optimization

    A Path-Following Full Newton-Step Infeasible Interior-Point Algorithm for P∗(κ)-HLCPs Based on a Kernel Function
    Soodabeh Asadi, Hossein Mansouri, Maryam Zangiabadi
    Journal of the Operations Research Society of China    2016, 4 (1): 77-.   DOI: 10.1007/s40305-015-0113-z

    In this paper, we present a path-following infeasible interior-point method for P∗(κ) horizontal linear complementarity problems (P∗(κ)-HLCPs). The algorithm is based on a simple kernel function for finding the search directions and defining the neighborhood of the central path. The algorithm follows the central path related to some perturbations of the original problem, using so-called feasibility and centering steps and taking only full Newton steps. It therefore has the advantage that the calculation of step sizes at each iteration is avoided. The complexity result shows that the full-Newton-step infeasible interior-point algorithm based on the simple kernel function enjoys the best-known iteration complexity for P∗(κ)-HLCPs.

    A New Infeasible-Interior-Point Algorithm Based on Wide Neighborhoods for Symmetric Cone Programming
    Chang-He Liu, Dan Wu, You-Lin Shang
    Journal of the Operations Research Society of China    2016, 4 (2): 147-.   DOI: 10.1007/s40305-016-0118-2

    In this paper, we present an infeasible-interior-point algorithm, based on a new wide neighborhood, for symmetric cone programming. We treat the classical Newton direction as the sum of two other directions and equip them with different step sizes. We prove the complexity bound of the new algorithm for the Nesterov-Todd (NT) direction as well as for the xs and sx directions. The complexity bounds obtained here are the same as those of small-neighborhood infeasible-interior-point algorithms over symmetric cones.

    An Efficient Inexact Newton-CG Algorithm for the Smallest Enclosing Ball Problem of Large Dimensions
    Ya-Feng Liu, Rui Diao, Feng Ye, Hong-Wei Liu
    Journal of the Operations Research Society of China    2016, 4 (2): 167-.   DOI: 10.1007/s40305-015-0097-8

    In this paper, we consider the problem of computing the smallest enclosing ball (SEB) of a set of m balls in R^n, where the product mn is large. We first approximate the non-differentiable SEB problem by its log-exponential aggregation function and then propose a computationally efficient inexact Newton-CG algorithm for the smoothing approximation problem by exploiting its special (approximate) sparsity structure. The key difference between the proposed inexact Newton-CG algorithm and the classical Newton-CG algorithm is that the gradient and the Hessian-vector product are inexactly computed in the proposed algorithm, which makes it capable of solving the large-scale SEB problem. We give an adaptive criterion for inexactly computing the gradient/Hessian and establish global convergence of the proposed algorithm. We illustrate the efficiency of the proposed algorithm by using the classical Newton-CG algorithm as well as the algorithm from Zhou et al. (Comput Optim Appl 30:147-160, 2005) as benchmarks.
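    As a rough illustration of the log-exponential aggregation idea only (this is just the smoothing step, not the inexact Newton-CG algorithm itself; the two-ball instance, smoothing parameter, and step size below are our own illustrative choices), the non-differentiable objective max_i(||x - c_i|| + r_i) can be replaced by a smooth softmax surrogate and minimized with plain gradient descent:

```python
import numpy as np

def seb_smooth(x, centers, radii, mu):
    """Log-exponential aggregation of max_i (||x - c_i|| + r_i)."""
    d = np.linalg.norm(x - centers, axis=1) + radii
    m = d.max()                                    # shift for numerical stability
    return float(m + mu * np.log(np.sum(np.exp((d - m) / mu))))

def seb_grad(x, centers, radii, mu):
    """Gradient of the smoothed objective (valid away from the centers)."""
    diff = x - centers
    dist = np.linalg.norm(diff, axis=1)
    d = dist + radii
    w = np.exp((d - d.max()) / mu)
    w /= w.sum()                                   # softmax weights on the balls
    return (diff / dist[:, None]).T @ w

# Two point "balls" at (-1, 0) and (1, 0): the smallest enclosing ball is
# centered at the origin with radius 1, and by symmetry the smoothed
# minimizer is the origin as well.
centers = np.array([[-1.0, 0.0], [1.0, 0.0]])
radii = np.zeros(2)
x = np.array([0.5, 0.3])
for _ in range(5000):
    x = x - 0.02 * seb_grad(x, centers, radii, mu=0.1)
```

    At the smoothed minimizer the objective value is 1 + μ log 2, i.e. the true radius plus the O(μ log m) aggregation error, which is what makes a decreasing-μ scheme attractive.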

    Semidefinite Relaxation for Two Mixed Binary Quadratically Constrained Quadratic Programs: Algorithms and Approximation Bounds
    Zi Xu, Ming-Yi Hong
    Journal of the Operations Research Society of China    2016, 4 (2): 205-.   DOI: 10.1007/s40305-015-0082-2

    This paper develops new semidefinite programming (SDP) relaxation techniques for two classes of mixed binary quadratically constrained quadratic programs and analyzes their approximation performance. The first class of problems finds two minimum norm vectors in N-dimensional real or complex Euclidean space, such that M out of 2M concave quadratic constraints are satisfied. By employing a special randomized rounding procedure, we show that the ratio between the norm of the optimal solution of this model and that of its SDP relaxation is upper bounded by 54M²/π in the real case and by 2√(4M)/π in the complex case. The second class of problems finds a series of minimum norm vectors subject to a set of quadratic constraints and cardinality constraints with both binary and continuous variables. We show that in this case the approximation ratio is also bounded and independent of problem dimension for both the real and the complex cases.

    Alternating Direction Method of Multipliers for l1-l2-Regularized Logistic Regression Model
    Yan-Qin Bai, Kai-Ji Shen
    Journal of the Operations Research Society of China    2016, 4 (2): 243-.   DOI: 10.1007/s40305-015-0090-2

    Logistic regression has proved to be a promising method for machine learning, focusing on the problem of classification. In this paper, we present an l1-l2-regularized logistic regression model, where the l1-norm is responsible for yielding a sparse logistic regression classifier and the l2-norm for keeping better classification accuracy. To solve the l1-l2-regularized logistic regression model, we develop an alternating direction method of multipliers with an embedded limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method. Furthermore, we implement our model for binary classification problems using real data examples selected from the University of California, Irvine Machine Learning Repository (UCI Repository). We compare our numerical results with those obtained by the well-known LIBSVM and SVM-Light software. The numerical results show that our l1-l2-regularized logistic regression model achieves better classification and less CPU time.
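    A minimal sketch of the splitting behind such a method (not the authors' implementation: the paper embeds L-BFGS in the smooth subproblem, whereas this sketch uses a few plain gradient steps, and the penalty weights, toy data, and iteration counts are illustrative assumptions):

```python
import numpy as np

def admm_l1l2_logreg(X, y, alpha, beta, rho=1.0, iters=100, inner=50, lr=0.1):
    """ADMM sketch for min_w sum_i log(1 + exp(-y_i * w^T x_i))
    + alpha*||w||_1 + (beta/2)*||w||^2, with labels y_i in {-1, +1}.
    Splitting w = z: the smooth terms go to the w-subproblem, the l1 term to z."""
    n = X.shape[1]
    w, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # w-update: gradient steps on logistic loss + l2 term + augmented quadratic
        for _ in range(inner):
            margins = y * (X @ w)
            grad = -(X.T @ (y / (1.0 + np.exp(margins)))) + beta * w + rho * (w - z + u)
            w = w - lr * grad
        v = w + u
        z = np.sign(v) * np.maximum(np.abs(v) - alpha / rho, 0.0)  # soft-thresholding
        u = u + w - z
    return z
```

    The z-update is where the l1-norm produces sparsity: once the penalty weight exceeds the gradient of the smooth part at zero, whole coordinates are thresholded exactly to zero.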

    A Variant of the Dual Face Algorithm Using Gauss-Jordan Elimination for Linear Programming
    Ping-Qi Pan
    Journal of the Operations Research Society of China    2016, 4 (3): 347-.   DOI: 10.1007/s40305-015-0106-y

    The dual face algorithm using Cholesky factorization was previously described for solving standard linear programming (LP) problems; Cholesky factorization, however, is not very suitable for sparse computations. To dodge this drawback, this paper presents a variant using Gauss-Jordan elimination for solving bounded-variable LP problems.

    Two-Phase-SQP Method with Higher-Order Convergence Property
    Suvra Kanti Chakraborty, Geetanjali Panda
    Journal of the Operations Research Society of China    2016, 4 (3): 385-.   DOI: 10.1007/s40305-016-0122-6

    We propose a two-phase SQP (sequential quadratic programming) algorithm for equality-constrained optimization problems. In this paper, an iteration process is developed, and at each iteration, two quadratic subproblems are solved. It is proved that, under some suitable assumptions and without computing further higher-order derivatives, this iteration process achieves a higher-order local convergence property in comparison with the Newton-SQP scheme. A theoretical advantage and a note on the l1 merit function associated with the method are provided.

    Inexact Proximal Point Methods for Quasiconvex Minimization on Hadamard Manifolds
    Nancy Baygorrea, Erik Alex Papa Quiroz, Nelson Maculan
    Journal of the Operations Research Society of China    2016, 4 (4): 397-.   DOI: 10.1007/s40305-016-0133-3

    In this paper, we present two inexact proximal point algorithms to solve minimization problems for quasiconvex objective functions on Hadamard manifolds. We prove that, under natural assumptions, the sequences generated by the algorithms are well defined and converge to critical points of the problem. We also present an application of the method to demand theory in economics.

    Alternating Direction Method of Multipliers for Linear Programming
    Bing-Sheng He, Xiao-Ming Yuan
    Journal of the Operations Research Society of China    2016, 4 (4): 425-.   DOI: 10.1007/s40305-016-0136-0

    Linear programming is the core problem of various operational research problems. The dominant approaches for linear programming are simplex and interior point methods. In this paper, we show that the alternating direction method of multipliers (ADMM), which was proposed a long time ago but has recently found more and more applications in a broad spectrum of areas, can also be easily used to solve the canonical linear programming model. The resulting per-iteration complexity is O(mn), where m is the constraint number and n the variable dimension. At each iteration, there are m subproblems that are eligible for parallel computation, each requiring only O(n) flops; there is no inner iteration, either. We thus introduce the new ADMM approach to linear programming, which may inspire deeper research for more complicated scenarios with more sophisticated results.
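    A minimal sketch of applying ADMM to the canonical LP (a generic splitting for illustration only, not necessarily the update scheme analyzed in the paper: the dense KKT solve below ignores the paper's m parallel O(n) subproblems, and ρ and the iteration count are arbitrary choices):

```python
import numpy as np

def admm_lp(c, A, b, rho=1.0, iters=5000):
    """ADMM sketch for the canonical LP  min c^T x  s.t.  Ax = b, x >= 0,
    using the splitting f(x) = c^T x + indicator{Ax = b},
    g(z) = indicator{z >= 0}, with consensus constraint x = z."""
    m, n = A.shape
    # KKT matrix of the equality-constrained x-update, built once
    K = np.block([[rho * np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    z, u = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        rhs = np.concatenate([rho * (z - u) - c, b])
        # x-update: minimize c^T x + (rho/2)||x - z + u||^2 subject to Ax = b
        x = np.linalg.solve(K, rhs)[:n]
        z = np.maximum(0.0, x + u)        # z-update: project onto x >= 0
        u = u + x - z                     # dual (scaled multiplier) update
    return z
```

    Returning z rather than x keeps the iterate exactly nonnegative; feasibility of Az = b holds only in the limit.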

    An Exact l1 Penalty Approach for Interval-Valued Programming Problem
    Anurag Jayswal, Jonaki Banerjee
    Journal of the Operations Research Society of China    2016, 4 (4): 461-.   DOI: 10.1007/s40305-016-0120-8

    The objective of this paper is to propose an exact l1 penalty method for constrained interval-valued programming problems, which transforms the constrained problem into an unconstrained interval-valued penalized optimization problem. Under some suitable conditions, we establish the equivalence between an optimal solution of the interval-valued primal problem and that of the penalized optimization problem. Moreover, saddle-point-type optimality conditions are also established in order to find the relation between an optimal solution of the penalized optimization problem and a saddle point of the Lagrangian function. Numerical examples are given to illustrate the derived results.
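    The exactness phenomenon behind such a method can be illustrated on an ordinary real-valued problem rather than an interval-valued one (the example problem and the penalty parameter below are our own): once the penalty weight ρ exceeds the optimal Lagrange multiplier, the unconstrained minimizer of the penalized function coincides with the constrained minimizer, without sending ρ to infinity.

```python
def l1_penalty(f, gs, rho):
    """Exact l1 penalty: P(x) = f(x) + rho * sum_i max(0, g_i(x)),
    for constraints g_i(x) <= 0."""
    return lambda x: f(x) + rho * sum(max(0.0, g(x)) for g in gs)

f = lambda x: (x - 2.0) ** 2     # objective
g = lambda x: x - 1.0            # constraint x <= 1; constrained minimum at x = 1
P = l1_penalty(f, [g], rho=3.0)  # rho exceeds the optimal multiplier (2 here)

# crude grid search over [-1, 3]; the grid contains x = 1.0 exactly
xs = [i / 1000.0 for i in range(-1000, 3001)]
xstar = min(xs, key=P)
```

    With ρ = 3 the kink of the penalty at x = 1 dominates the slope of f there, so the grid minimizer lands exactly on the constrained optimum.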

    A Cutting Hyperplane Projection Method for Solving Generalized Quasi-Variational Inequalities
    Ming-Lu Ye
    Journal of the Operations Research Society of China    2016, 4 (4): 483-.   DOI: 10.1007/s40305-016-0123-5
    A Hybrid Second-Order Method for Homogenous Polynomial Optimization over Unit Sphere
    Yi-Ju Wang, Guang-Lu Zhou
    Journal of the Operations Research Society of China    2017, 5 (1): 99-.   DOI: 10.1007/s40305-016-0148-9
    In this paper, we propose a hybrid second-order method for homogeneous polynomial optimization over the unit sphere, in which the new iterate is generated by employing the second-order information of the objective function. To guarantee convergence, we fall back on the shifted power method when the second-order step does not improve the objective function. As the Hessian of the objective function can easily be computed and no line search is involved in the second-order iterative step, the method is not time-consuming. Further, the new iterate is generated in a relatively larger region, so the global maximum is more likely to be obtained. The given numerical experiments show the efficiency of the proposed method.
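    For reference, the shifted power method used as the fallback can be sketched in its simplest, degree-2 special case, maximizing x^T A x over the unit sphere (this is only the quadratic instance of homogeneous polynomial optimization; the matrix, shift, and iteration count are illustrative):

```python
import numpy as np

def shifted_power(A, alpha, iters=200):
    """Shifted power iteration for max of x^T A x over the unit sphere.
    The shift alpha makes A + alpha*I positive definite, so the objective
    is nondecreasing along the iterates."""
    x = np.zeros(A.shape[0])
    x[0] = 1.0                                # arbitrary unit starting point
    for _ in range(iters):
        y = (A + alpha * np.eye(A.shape[0])) @ x
        x = y / np.linalg.norm(y)
    return x, float(x @ A @ x)
```

    For A = [[2, 1], [1, 2]] the maximum of x^T A x on the sphere is the largest eigenvalue, 3, attained at (1, 1)/√2.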
    First-Order Algorithms for Convex Optimization with Nonseparable Objective and Coupled Constraints
    Xiang Gao, Shu-Zhong Zhang
    Journal of the Operations Research Society of China    2017, 5 (2): 131-.   DOI: 10.1007/s40305-016-0131-5

    In this paper, we consider a block-structured convex optimization model in which the block variables are nonseparable in the objective and are further linearly coupled in the constraint. For the 2-block case, we propose a number of first-order algorithms to solve this model. First, the alternating direction method of multipliers (ADMM) is extended, assuming that it is easy to optimize the augmented Lagrangian function with respect to one block of variables at a time while fixing the other block. We prove that an O(1/t) iteration complexity bound holds under suitable conditions, where t is the number of iterations. If the subroutines of the ADMM cannot be implemented, we propose alternative algorithms, called the alternating proximal gradient method of multipliers, the alternating gradient projection method of multipliers, and hybrids thereof. Under suitable conditions, the O(1/t) iteration complexity bound is shown to hold for all the newly proposed algorithms. Finally, we extend the analysis for the ADMM to the general multi-block case.

    A Primal-Dual Interior-Point Method for Optimal Grasping Manipulation of Multi-fingered Hand-Arm Robots
    Yan-Qin Bai, Xue-Rui Gao, Chang-Jun Yu
    Journal of the Operations Research Society of China    2017, 5 (2): 177-.   DOI: 10.1007/s40305-017-0161-7

    In this paper, we consider an optimization problem arising in the grasping manipulation of multi-fingered hand-arm robots. We first formulate an optimization model for the problem based on the dynamic equations of the object and the friction constraints. Then, we reformulate the model as a convex quadratic program over circular cones. Moreover, we propose a primal-dual interior-point algorithm based on a kernel function to solve this convex quadratic program over circular cones. We derive both the convergence of the algorithm and the iteration bounds for large- and small-update methods, respectively. Finally, we carry out numerical tests of 180° and 90° manipulations of the hand-arm robot to demonstrate the effectiveness of the proposed algorithm.

    Optimization Methods for Box-Constrained Nonlinear Programming Problems Based on Linear Transformation and Lagrange Interpolating Polynomials
    Zhi-You Wu, Fu-Sheng Bai, Jing Tian
    Journal of the Operations Research Society of China    2017, 5 (2): 193-.   DOI: 10.1007/s40305-017-0157-3

    In this paper, an optimality condition for nonlinear programming problems with box constraints is given by using linear transformation and Lagrange interpolating polynomials. Based on this condition, two new local optimization methods are developed. The solution points obtained by the new local optimization methods can improve the Karush–Kuhn–Tucker (KKT) points in general. Two global optimization methods are then proposed by combining the two new local optimization methods with a filled function method. Some numerical examples are reported to show the effectiveness of the proposed methods.

    A Homotopy Alternating Direction Method of Multipliers for Linearly Constrained Separable Convex Optimization
    Jiao Yang, Yi-Qing Dai, Zheng Peng, Jie-Peng Zhuang, Wen-Xing Zhu
    Journal of the Operations Research Society of China    2017, 5 (2): 271-.   DOI: 10.1007/s40305-017-0170-6
    Linearly constrained separable convex minimization problems arise widely in many real-world applications. In this paper, we propose a homotopy-based alternating direction method of multipliers for solving this kind of problem. The proposed method combines advantages of the classical proximal alternating direction method of multipliers and the homotopy method. Under some suitable conditions, we prove global convergence and a worst-case O(1/k) convergence rate in a nonergodic sense. Preliminary numerical results indicate the effectiveness and efficiency of the proposed method compared with some state-of-the-art methods.
    On the Convergence Rate of an Inexact Proximal Point Algorithm for Quasiconvex Minimization on Hadamard Manifolds
    Nancy Baygorrea, Erik Alex Papa Quiroz, Nelson Maculan
    Journal of the Operations Research Society of China    2017, 5 (4): 457-467.   DOI: 10.1007/s40305-016-0129-z

    In this paper, we present an analysis of the rate of convergence of an inexact proximal point algorithm for solving minimization problems for quasiconvex objective functions on Hadamard manifolds. We prove that, under natural assumptions, the sequence generated by the algorithm converges linearly or superlinearly to a critical point of the problem.

    An LQP-Based Two-Step Method for Structured Variational Inequalities
    Hong-Jin He, Kai Wang, Xing-Ju Cai, De-Ren Han
    Journal of the Operations Research Society of China    2017, 5 (3): 301-317.   DOI: 10.1007/s40305-016-0147-x

    The logarithmic quadratic proximal (LQP) regularization is a popular and powerful proximal regularization technique for solving monotone variational inequalities with nonnegative constraints. In this paper, we propose an implementable two-step method for solving structured variational inequality problems by combining LQP regularization and a projection method. The proposed algorithm consists of two parts. The first step generates a pair of predictors via inexactly solving a system of nonlinear equations. Then, the second step updates the iterate via a simple correction step. We establish the global convergence of the new method under mild assumptions. To improve the numerical performance of our new method, we further present a self-adaptive version and implement it to solve a traffic equilibrium problem. The numerical results further demonstrate the efficiency of the proposed method.

    Polynomial Convergence of Primal-Dual Path-Following Algorithms for Symmetric Cone Programming Based on Wide Neighborhoods and a New Class of Directions
    Chang-He Liu, Yuan-Yuan Huang, You-Lin Shang
    Journal of the Operations Research Society of China    2017, 5 (3): 333-346.   DOI: 10.1007/s40305-017-0172-4

    This paper presents a class of primal-dual path-following interior-point algorithms for symmetric cone programming (SCP) based on wide neighborhoods and new directions with a parameter θ. When the parameter θ = 1, the direction is exactly the classical Newton direction. When the parameter θ is independent of the rank of the associated Euclidean Jordan algebra, the algorithm terminates in at most O(κr log ε⁻¹) iterations, which coincides with the best known iteration bound for the classical wide neighborhood algorithms. When the parameter θ = √n/βτ and the Nesterov–Todd search direction is used, the algorithm has O(√r log ε⁻¹) iteration complexity, the best iteration complexity obtained so far by any interior-point method for solving SCP. To our knowledge, this is the first time that a class of interior-point algorithms including the classical wide neighborhood path-following algorithm is proposed and analyzed over symmetric cones.

    A Modified Proximal Gradient Method for a Family of Nonsmooth Convex Optimization Problems
    Ying-Yi Li, Hai-Bin Zhang, Fei Li
    Journal of the Operations Research Society of China    2017, 5 (3): 391-.   DOI: 10.1007/s40305-017-0155-5

    In this paper, we propose a modified proximal gradient method for solving a class of nonsmooth convex optimization problems which arise in many contemporary statistical and signal processing applications. The proposed method adopts a new scheme to construct the descent direction based on the proximal gradient method. It is proven that the modified proximal gradient method is Q-linearly convergent without the assumption of strong convexity of the objective function. Finally, some numerical experiments are conducted to evaluate the proposed method.
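    For context, the baseline that such methods build on is the plain proximal gradient iteration, sketched here on an l1-regularized least-squares instance (this is the standard baseline, not the paper's modified descent-direction scheme; the data, regularization weight, and step size are illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(A, b, lam, step, iters=500):
    """Plain proximal gradient for min 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth part followed by the l1 prox."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)               # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

    With A = I the fixed point is soft_threshold(b, lam), which makes the shrinkage effect of the nonsmooth term easy to see.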
