Most cited articles

    Condition: Baidu + All
    Complexities of Some Problems on Multi-agent Scheduling on a Single Machine
    Jin-Jiang Yuan
    Journal of the Operations Research Society of China    2016, 4 (3): 379-.  

    We study the computational complexities of three problems on multi-agent scheduling on a single machine. Among the three problems, the computational complexities of the first two problems were still open and the last problem was shown to be unary NP-hard in the literature. We show in this paper that the first two problems are unary NP-hard. We also show that the unary NP-hardness proof for the last problem in the literature is invalid, and so, the exact complexity of the problem is still open.

    Cited: Baidu(138)
    On the Sublinear Convergence Rate of Multi-block ADMM
    Tian-Yi Lin · Shi-Qian Ma · Shu-Zhong Zhang
    Journal of the Operations Research Society of China    2015, 3 (3): 251-.   DOI: 10.1007/s40305-015-0092-0
    The alternating direction method of multipliers (ADMM) is widely used in solving structured convex optimization problems. Despite its success in practice, the convergence of the standard ADMM for minimizing the sum of N (N ≥ 3) convex functions, whose variables are linked by linear constraints, has remained unclear for a very long time. Recently, Chen et al. (Math Program, doi:10.1007/s10107-014-0826-5, 2014) provided a counter-example showing that the ADMM for N ≥ 3 may fail to converge without further conditions. Since the ADMM for N ≥ 3 has been very successful when applied to many problems arising from real practice, it is worth investigating further under what sufficient conditions it can be guaranteed to converge. In this paper, we present such sufficient conditions that guarantee a sublinear convergence rate for the ADMM for N ≥ 3. Specifically, we show that if one of the functions is convex (not necessarily strongly convex) and the other N − 1 functions are strongly convex, and the penalty parameter lies in a certain region, the ADMM converges with rate O(1/t) in a certain ergodic sense and o(1/t) in a certain non-ergodic sense, where t denotes the number of iterations. As a by-product, we also provide a simple proof of the O(1/t) convergence rate of the two-block ADMM in terms of both objective error and constraint violation, without assuming any condition on the penalty parameter or strong convexity of the functions.
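    The multi-block iteration discussed in this abstract can be sketched in a few lines. The quadratic objective, the data, and the penalty value below are our illustrative choices, not from the paper; all three blocks are made strongly convex, which more than satisfies the N − 1 condition described above.

```python
import numpy as np

# 3-block ADMM sketch for
#   min (1/2)||x1-c1||^2 + (1/2)||x2-c2||^2 + (1/2)||x3-c3||^2
#   s.t. x1 + x2 + x3 = b.
rng = np.random.default_rng(0)
n = 5
c1, c2, c3 = rng.standard_normal((3, n))
b = rng.standard_normal(n)
beta = 1.0                     # penalty parameter (illustrative choice)

x1, x2, x3 = np.zeros(n), np.zeros(n), np.zeros(n)
lam = np.zeros(n)              # multiplier for the coupling constraint
for _ in range(200):
    # Gauss-Seidel sweep over the augmented Lagrangian: minimizing
    # (1/2)||x-c||^2 + lam.(x+r) + (beta/2)||x+r||^2 in x gives
    # x = (c - lam - beta*r)/(1+beta), with r = (other blocks) - b.
    x1 = (c1 - lam - beta * (x2 + x3 - b)) / (1 + beta)
    x2 = (c2 - lam - beta * (x1 + x3 - b)) / (1 + beta)
    x3 = (c3 - lam - beta * (x1 + x2 - b)) / (1 + beta)
    lam = lam + beta * (x1 + x2 + x3 - b)   # multiplier update

print(np.linalg.norm(x1 + x2 + x3 - b))     # constraint residual -> ~0
```

    For this separable quadratic instance the KKT conditions give x_i = c_i − λ* with λ* = (c1 + c2 + c3 − b)/3, so convergence is easy to verify numerically.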
    Cited: Baidu(108)
    Chain-to-Chain Competition Under Demand Uncertainty
    Owen Q. Wu · Hong Chen
    Journal of the Operations Research Society of China    2016, 4 (1): 49-.   DOI: 10.1007/s40305-015-0114-y

    In this paper, we aim to study the structure choice of supply chains in a competitive environment with uncertain demand. We consider two competing supply chains, each of which chooses to either vertically integrate or decentralize with coordinating contracts. We first analyze firms' strategic behavior under given supply chain structures: two integrated chains (II), two decentralized chains (DD), and a mixed structure with one decentralized chain and one integrated chain. We then compare the different supply chain structures and examine the equilibrium structure choice. We find that the equilibrium structure depends on the product characteristics: for substitutable products, DD is the equilibrium supply chain structure, whereas for complementary products, II is the equilibrium structure. Furthermore, high demand uncertainty strengthens these equilibrium choices.

    Cited: Baidu(89)
    On the Linear Convergence of a Proximal Gradient Method for a Class of Nonsmooth Convex Minimization Problems
    Hai-Bin Zhang · Jiao-Jiao Jiang · Zhi-Quan Luo
    Journal of the Operations Research Society of China    2013, 1 (2): 163-.  

    We consider a class of nonsmooth convex optimization problems where the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the sum of both the ℓ1-norm and the ℓ2-norm of the optimization variables. This class of problems arises naturally from applications in sparse group Lasso, a popular technique for variable selection. An effective approach to solve such problems is the proximal gradient method (PGM). In this paper we prove a local error bound around the optimal solution set for this problem and use it to establish the linear convergence of the PGM without assuming strong convexity of the overall objective function.
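    As a rough illustration of this setup (not the paper's experiments), the sketch below runs a proximal gradient iteration on a tiny one-group sparse group Lasso instance. The data E, q and the weights w1, w2 are invented; the closed form of the prox of w1‖·‖1 + w2‖·‖2 (soft-thresholding followed by block shrinkage) is a standard fact.

```python
import numpy as np

# PGM for  min (1/2)||E x - q||^2 + w1 ||x||_1 + w2 ||x||_2.

def prox(v, w1, w2):
    s = np.sign(v) * np.maximum(np.abs(v) - w1, 0.0)  # l1 soft-threshold
    ns = np.linalg.norm(s)
    if ns <= w2:
        return np.zeros_like(s)                       # whole group zeroed
    return (1.0 - w2 / ns) * s                        # l2 block shrinkage

rng = np.random.default_rng(1)
E = rng.standard_normal((20, 8))
q = rng.standard_normal(20)
w1, w2 = 0.1, 0.1
L = np.linalg.norm(E, 2) ** 2        # Lipschitz constant of the gradient

x = np.zeros(8)
for _ in range(500):
    x = prox(x - E.T @ (E @ x - q) / L, w1 / L, w2 / L)  # PGM step

obj = 0.5 * np.sum((E @ x - q) ** 2) + w1 * np.abs(x).sum() + w2 * np.linalg.norm(x)
print(obj)
```

    At a solution, x is a fixed point of the prox-gradient map, which gives a simple stopping test.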

    Cited: Baidu(47)
    Analysis of Sparse Quasi-Newton Updates with Positive Definite Matrix Completion
    Yu-Hong Dai · Nobuo Yamashita
    Journal of the Operations Research Society of China    2014, 2 (1): 39-56.   DOI: 10.1007/s40305-014-0039-x

    Based on the idea of maximum determinant positive definite matrix completion, Yamashita (Math Prog 115(1):1–30, 2008) proposed a new sparse quasi-Newton update, called MCQN, for unconstrained optimization problems with sparse Hessian structures. In exchange for the relaxation of the secant equation, the MCQN update avoids solving difficult subproblems and overcomes the ill-conditioning of approximate Hessian matrices. However, local and superlinear convergence results were only established for the MCQN update with the DFP method. In this paper, we extend the convergence result to the MCQN update with the whole Broyden convex family. Numerical results are also reported, which suggest some efficient ways of choosing the parameter in the MCQN update with the Broyden family.

    Cited: Baidu(38)
    Online Scheduling on Two Uniform Unbounded Parallel-Batch Machines to Minimize Makespan
    Jin-Jiang Yuan, Li-Li Ren, Ji Tian, Ru-Yan Fu
    Journal of the Operations Research Society of China    2019, 7 (2): 303-319.   DOI: 10.1007/s40305-018-0192-8
    We study an online (over time) scheduling problem on two uniform unbounded parallel-batch machines with processing speeds 1 and v (0 < v ≤ 1), respectively, to minimize the makespan. We first show that no online algorithm can have a competitive ratio less than 1 + αL, where αL = (√5 − 1)/2 ≈ 0.618 if 0 < v ≤ 1/2, and αL = α2(v) is the positive root of α³ + 3α² + (3 − 1/v)α − 1/v = 0 if 1/2 < v ≤ 1. This lower bound is still valid when all jobs have the same processing times. Then, we provide an online algorithm with a competitive ratio of at most min{(√5 + 1)/2, √2/v}. When the jobs have the same processing times, we present a best possible online algorithm with competitive ratio 1 + αL.
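    The cubic defining the second case of the lower bound is easy to check numerically. The script below is our own companion, not part of the paper: plain bisection finds the positive root, and at v = 1/2 the root meets (√5 − 1)/2, so the two cases of the bound agree at the boundary.

```python
import math

def alpha_L(v):
    """Positive root of a^3 + 3a^2 + (3 - 1/v)a - 1/v on (0, 1)."""
    f = lambda a: a ** 3 + 3 * a ** 2 + (3 - 1 / v) * a - 1 / v
    lo, hi = 0.0, 1.0          # f(0) = -1/v < 0 and f(1) = 7 - 2/v > 0
    for _ in range(80):        # 80 halvings shrink the bracket to ~1e-24
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(alpha_L(0.5))  # meets the golden-ratio case (sqrt(5)-1)/2
print(alpha_L(1.0))  # lower-bound constant for equal-speed machines
```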
    Cited: Baidu(38)
    A New Analysis on the Barzilai-Borwein Gradient Method
    Yu-Hong Dai
    Journal of the Operations Research Society of China    2013, 1 (2): 187-.   DOI: 10.1007/s40305-013-0007-x

    Due to its simplicity and efficiency, the Barzilai and Borwein (BB) gradient method has received much attention in different fields. This paper presents a new analysis of the BB method for two-dimensional strictly convex quadratic functions. The analysis begins with the assumption that the gradient norms at the first two iterations are fixed. We show that there is a superlinear convergence step in at most three consecutive steps. Meanwhile, we provide a better convergence relation for the BB method. The influence of the starting point and the condition number on the convergence rate is comprehensively addressed.
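    The BB iteration on a two-dimensional strictly convex quadratic, the setting of this analysis, fits in a few lines. The matrix A, the starting point, and the first stepsize below are illustrative choices of ours, not the paper's.

```python
import numpy as np

# BB gradient method on f(x) = (1/2) x^T A x in two dimensions.
A = np.diag([1.0, 10.0])       # eigenvalues 1 and 10
x = np.array([1.0, 1.0])
g = A @ x                      # gradient of f
alpha = 0.1                    # initial stepsize 1/lambda_max

for _ in range(100):
    if np.linalg.norm(g) < 1e-12:
        break                  # converged
    x_new = x - alpha * g
    g_new = A @ x_new
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)  # BB stepsize from the last (s, y) pair
    x, g = x_new, g_new

print(np.linalg.norm(g))       # tiny after only a few steps
```

    Note that the iteration is nonmonotone in general, yet the gradient norm collapses quickly, consistent with the superlinear behavior the paper analyzes.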

    Cited: Baidu(36)
    Alternating Direction Method of Multipliers for Sparse Principal Component Analysis
    Shi-Qian Ma
    Journal of the Operations Research Society of China    2013, 1 (2): 253-.   DOI: 10.1007/s40305-013-0016-9

    We consider a convex relaxation of sparse principal component analysis proposed by d'Aspremont et al. (SIAM Rev. 49:434–448, 2007). This convex relaxation is a nonsmooth semidefinite programming problem in which the ℓ1-norm of the desired matrix is imposed in either the objective function or the constraint to improve the sparsity of the resulting matrix. The sparse principal component is obtained by a rank-one decomposition of the resulting sparse matrix. We propose an alternating direction method based on a variable-splitting technique and an augmented Lagrangian framework for solving this nonsmooth semidefinite programming problem. In contrast to the first-order method proposed in d'Aspremont et al. (SIAM Rev. 49:434–448, 2007), which solves the dual of the original semidefinite programming problem approximately, our method deals with the primal problem directly and solves it exactly, which guarantees that the resulting matrix is sparse. A global convergence result is established for the proposed method. Numerical results on both synthetic problems and real applications from classification of text data and senate voting data are reported to demonstrate the efficacy of our method.

    Cited: Baidu(35)
    An Adaptive Infeasible Interior-Point Algorithm for Linear Complementarity Problems
    Hossein Mansouri · Mohammad Pirhaji
    Journal of the Operations Research Society of China    2013, 1 (4): 523-536.   DOI: 10.1007/s40305-013-0031-x

    Interior-point methods (IPMs) not only are the most effective methods in practice but also have polynomial-time complexity. Many researchers have proposed IPMs for linear optimization (LO) and achieved plentiful results, and in many cases these methods extend successfully from LO to linear complementarity problems (LCPs). In this paper, motivated by the complexity results for linear optimization based on the study of H. Mansouri et al. (Mansouri and Zangiabadi in J. Optim. 62(2):285–297, 2013), we extend their idea from LO to LCPs. The proposed algorithm requires two types of full-Newton steps, called feasibility steps and (ordinary) centering steps, respectively. At each iteration, both feasibility and optimality are reduced at exactly the same rate. In each iteration of the algorithm we use the largest possible barrier parameter value θ, which lies between the two values 1/(17n) and 1/(13n); this makes the algorithm converge faster for problems having a strictly complementary solution.

    Cited: Baidu(33)
    Uniform Parallel-Machine Scheduling with Time Dependent Processing Times
    Juan Zou · Yu-Zhong Zhang · Cui-Xia Miao
    Journal of the Operations Research Society of China    2013, 1 (2): 239-.   DOI: 10.1007/s40305-013-0014-y

    We consider several uniform parallel-machine scheduling problems in which the processing time of a job is a linear increasing function of its starting time. The objectives are to minimize the total completion time of all jobs and the total load on all machines. We show that the problems are polynomially solvable when the increasing rates are identical for all jobs, and we propose a fully polynomial-time approximation scheme for the standard linear deteriorating function, where the objective is to minimize the total load on all machines. We also consider the problem in which the processing time of a job is a simple linear increasing function of its starting time and each job has a delivery time. The objective is to find a schedule that minimizes the time by which all jobs are delivered, and we propose a fully polynomial-time approximation scheme to solve this problem.
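    The "simple linear" deterioration model mentioned here is easy to illustrate: a job started at time s > 0 with rate b_j has processing time p_j(s) = b_j·s, hence completion time s·(1 + b_j). The function name, rates, and start time t0 below are our own illustrative choices.

```python
# Completion times on one machine under simple linear deterioration.

def completion_times(b_rates, t0=1.0):
    """Completion times of jobs processed consecutively from time t0 > 0,
    where a job with rate b started at time s finishes at s * (1 + b)."""
    t, out = t0, []
    for b in b_rates:
        t = t * (1.0 + b)      # C_j = s_j + b_j * s_j
        out.append(t)
    return out

cs = completion_times([0.5, 0.2, 0.1])
print(cs)
# The final load t0*(1.5)(1.2)(1.1) is the same for every job order,
# while the total completion time sum(cs) does depend on the order.
```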

    Cited: Baidu(31)
    New Bounds for RIC in Compressed Sensing
    Sheng-Long Zhou · Ling-Chen Kong · Nai-Hua Xiu
    Journal of the Operations Research Society of China    2013, 1 (2): 227-.   DOI: 10.1007/s40305-013-0013-z

    This paper gives new bounds for the restricted isometry constant (RIC) in compressed sensing. Let Φ be an m × n real matrix and k be a positive integer with k ≤ n. The main results of this paper show that if the restricted isometry constant of Φ satisfies δ8ak < 1 and some other conditions hold, then a k-sparse solution can be recovered exactly via ℓ1 minimization in the noiseless case. In particular, when a = 1, 1.5, 2 and 3, we have δ2k < 0.5746 and δ8k < 1, or δ2.5k < 0.7046 and δ12k < 1, or δ3k < 0.7731 and δ16k < 1, or δ4k < 0.8445 and δ24k < 1.

    Cited: Baidu(25)
    First-Order Algorithms for Convex Optimization with Nonseparable Objective and Coupled Constraints
    Xiang Gao · Shu-Zhong Zhang
    Journal of the Operations Research Society of China    2017, 5 (2): 131-.   DOI: 10.1007/s40305-016-0131-5

    In this paper, we consider a block-structured convex optimization model, where in the objective the block variables are nonseparable and they are further linearly coupled in the constraint. For the 2-block case, we propose a number of first-order algorithms to solve this model. First, the alternating direction method of multipliers (ADMM) is extended, assuming that it is easy to optimize the augmented Lagrangian function with one block of variables at each time while fixing the other block. We prove that an O(1/t) iteration complexity bound holds under suitable conditions, where t is the number of iterations. If the subroutines of the ADMM cannot be implemented, then we propose new alternative algorithms, called the alternating proximal gradient method of multipliers, the alternating gradient projection method of multipliers, and hybrids thereof. Under suitable conditions, the O(1/t) iteration complexity bound is shown to hold for all the newly proposed algorithms. Finally, we extend the analysis for the ADMM to the general multi-block case.

    Cited: Baidu(24)
    Approximation Algorithms for Discrete Polynomial Optimization
    Si-Mai He · Zhe-Ning Li · Shu-Zhong Zhang
    Journal of the Operations Research Society of China    2013, 1 (1): 3-.  
    In this paper, we consider approximation algorithms for optimizing a generic multivariate polynomial function in discrete (typically binary) variables. Such models have natural applications in graph theory, neural networks, error-correcting codes, among many others. In particular, we focus on three types of optimization models: (1) maximizing a homogeneous polynomial function in binary variables; (2) maximizing a homogeneous polynomial function in binary variables, mixed with variables under spherical constraints; (3) maximizing an inhomogeneous polynomial function in binary variables. We propose polynomial-time randomized approximation algorithms for such polynomial optimization models, and establish the approximation ratios (or relative approximation ratios whenever appropriate) for the proposed algorithms. Some examples of applications for these models and algorithms are discussed as well.
    Cited: Baidu(24)
    A New Restarting Adaptive Trust-Region Method for Unconstrained Optimization
    Morteza Kimiaei, Susan Ghaderi
    Journal of the Operations Research Society of China    2017, 5 (4): 487-507.   DOI: 10.1007/s40305-016-0149-8

    In this paper, we present a new adaptive trust-region method for solving nonlinear unconstrained optimization problems. More precisely, the trust-region radius is based on a nonmonotone technique and uses an adaptively chosen approximation of the Hessian. We produce a suitable trust-region radius, preserve global convergence to first-order critical points under classical assumptions, and improve the practical performance of the new algorithm compared to other existing variants. Moreover, the quadratic convergence rate is established under suitable conditions. Computational results on the CUTEst test collection of unconstrained problems are presented to show the effectiveness of the proposed algorithm compared with some existing methods.

    Cited: Baidu(23)
    On Solutions of Sparsity Constrained Optimization
    Li-Li Pan · Nai-Hua Xiu · Sheng-Long Zhou
    Journal of the Operations Research Society of China    2015, 3 (4): 421-.   DOI: 10.1007/s40305-015-0101-3

    In this paper, we mainly study the existence of solutions to sparsity constrained optimization (SCO). Based on expressions for the tangent cone and normal cone of the sparsity constraint, we present and characterize two first-order necessary optimality conditions for SCO: N-stationarity and T-stationarity. Then we give second-order necessary and sufficient optimality conditions for SCO. Finally, we extend these results to SCO with a nonnegativity constraint.
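    The stationarity conditions themselves are developed in the paper; a common companion viewpoint (our illustration, not the paper's exact formulation) is that the Euclidean projection onto the sparsity constraint ‖x‖0 ≤ s keeps the s largest-magnitude entries, so a projected-gradient fixed-point test can be run directly.

```python
import numpy as np

def project_sparse(v, s):
    """Euclidean projection onto {x : ||x||_0 <= s}: keep the s
    largest-magnitude entries (a standard fact), zero the rest."""
    out = v.copy()
    out[np.argsort(np.abs(v))[:-s]] = 0.0
    return out

# Example: minimize f(x) = (1/2)||x - c||^2 subject to ||x||_0 <= 2.
# The projection of c is a global minimizer here, and it is a fixed
# point of x -> project_sparse(x - t * grad f(x)) with t = 1.
c = np.array([3.0, -0.5, 1.0, 0.2])
x = project_sparse(c, 2)
print(x)  # keeps the two largest entries: [3. 0. 1. 0.]
```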

    Cited: Baidu(21)
    Recent Advances in Mathematical Programming with Semi-continuous Variables and Cardinality Constraint
    Xiao-Ling Sun · Xiao-Jin Zheng · Duan Li
    Journal of the Operations Research Society of China    2013, 1 (1): 55-.  
    Mathematical programming problems with semi-continuous variables and cardinality constraint have many applications, including production planning, portfolio selection, compressed sensing and subset selection in regression. This class of problems can be modeled as mixed-integer programs with special structures and are in general NP-hard. In the past few years, based on new reformulations, approximation and relaxation techniques, promising exact and approximate methods have been developed. We survey in this paper these recent developments for this challenging class of mathematical programming problems.
    Cited: Baidu(21)
    Optimal Control of Nonlinear Switched Systems: Computational Methods and Applications
    Qun Lin · Ryan Loxton · Kok Lay Teo
    Journal of the Operations Research Society of China    2013, 1 (3): 275-312.   DOI: 10.1007/s40305-013-0021-z

    A switched system is a dynamic system that operates by switching between different subsystems or modes. Such systems exhibit both continuous and discrete characteristics, a dual nature that makes designing effective control policies a challenging task. The purpose of this paper is to review some of the latest computational techniques for generating optimal control laws for switched systems with nonlinear dynamics and continuous inequality constraints. We discuss computational strategies for optimizing both the times at which a switched system switches from one mode to another (the so-called switching times) and the sequence in which a switched system operates its various possible modes (the so-called switching sequence). These strategies involve novel combinations of the control parameterization method, the time-scaling transformation, and bilevel programming and binary relaxation techniques. We conclude the paper by discussing a number of switched system optimal control models arising in practical applications.

    Cited: Baidu(20)
    On the Linear Convergence of the Approximate Proximal Splitting Method for Non-smooth Convex Optimization
    Mojtaba Kadkhodaie · Maziar Sanjabi · Zhi-Quan Luo
    Journal of the Operations Research Society of China    2014, 2 (2): 123-142.   DOI: 10.1007/s40305-014-0047-x

    Consider the problem of minimizing the sum of two convex functions, one smooth and the other non-smooth. In this paper, we introduce a general class of approximate proximal splitting (APS) methods for solving such minimization problems. Methods in the APS class include many well-known algorithms, such as the proximal splitting method, the block coordinate descent (BCD) method, and the approximate gradient projection methods for smooth convex optimization. We establish the linear convergence of APS methods under a local error bound assumption. Since the latter is known to hold for compressive sensing and sparse group LASSO problems, our analysis implies the linear convergence of the BCD method for these problems without a strong convexity assumption.

    Cited: Baidu(18)
    An Alternating Direction Approximate Newton Algorithm for Ill-Conditioned Inverse Problems with Application to Parallel MRI
    William Hager · Cuong Ngo · Maryam Yashtini · Hong-Chao Zhang
    Journal of the Operations Research Society of China    2015, 3 (2): 139-.   DOI: 10.1007/s40305-015-0078-y

    An alternating direction approximate Newton (ADAN) method is developed for solving inverse problems of the form min{φ(Bu) + (1/2)‖Au − f‖₂²}, where φ is convex and possibly nonsmooth, and A and B are matrices. Problems of this form arise in image reconstruction, where A is the matrix describing the imaging device, f is the measured data, φ is a regularization term, and B is a derivative operator. The proposed algorithm is designed to handle applications where A is a large, dense, ill-conditioned matrix. The algorithm is based on the alternating direction method of multipliers (ADMM) and an approximation to Newton's method in which a term in Newton's Hessian is replaced by a Barzilai–Borwein (BB) approximation. It is shown that ADAN converges to a solution of the inverse problem. Numerical results are provided using test problems from parallel magnetic resonance imaging. ADAN was faster than a proximal ADMM scheme that does not employ a BB Hessian approximation, while it was more stable and much simpler than the related Bregman operator splitting algorithm with variable stepsize, which also employs a BB-based Hessian approximation.

    Cited: Baidu(17)
    PPA-Like Contraction Methods for Convex Optimization: A Framework Using Variational Inequality Approach
    Bing-Sheng He
    Journal of the Operations Research Society of China    2015, 3 (4): 391-.   DOI: 10.1007/s40305-015-0108-9

    Linearly constrained convex optimization has many applications. The first-order optimality condition of linearly constrained convex optimization is a monotone variational inequality (VI). For solving a VI, the proximal point algorithm (PPA) in the Euclidean norm is classical but abstract; hence, the classical PPA only plays an important theoretical role and is rarely used in practical scientific computation. In this paper, we give a review of the recently developed customized PPA in the H-norm (H is a positive definite matrix). In the framework of the customized PPA, it is easy to construct contraction-type methods for convex optimization with different linear constraints. In each iteration of the proposed methods, we need only solve the proximal subproblems, which have closed-form solutions or can be efficiently solved to high precision. Some novel applications and numerical experiments are reported. Additionally, the original primal-dual hybrid gradient method is modified into a convergent algorithm by using a prediction-correction uniform framework. Using the variational inequality approach, the contractive convergence and convergence-rate proofs of the framework are more general and quite simple.

    Cited: Baidu(17)