Table of Contents

    30 December 2015, Volume 3 Issue 4
    Preface: Special Issue on Data-Driven Optimization Models and Algorithms
    Yan-Qin Bai · Yu-Hong Dai · Nai-Hua Xiu
    2015, 3(4):  389.  doi:10.1007/s40305-015-0111-1
    PPA-Like Contraction Methods for Convex Optimization: A Framework Using Variational Inequality Approach
    Bing-Sheng He
    2015, 3(4):  391.  doi:10.1007/s40305-015-0108-9

    Linearly constrained convex optimization has many applications. The first-order optimality condition of a linearly constrained convex optimization problem is a monotone variational inequality (VI). For solving VIs, the proximal point algorithm (PPA) in the Euclidean norm is classical but abstract; hence, the classical PPA plays an important theoretical role but is rarely used in practical scientific computation. In this paper, we review the recently developed customized PPA in the H-norm (H is a positive definite matrix). Within the framework of the customized PPA, it is easy to construct contraction-type methods for convex optimization with different linear constraints. In each iteration of the proposed methods, we only need to solve proximal subproblems which have closed-form solutions or can be solved efficiently to high precision. Some novel applications and numerical experiments are reported. Additionally, the original primal-dual hybrid gradient method is modified into a convergent algorithm by using a uniform prediction-correction framework. Using the variational inequality approach, the contractive convergence and convergence rate proofs of the framework are more general and quite simple.
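
    As a point of reference, a minimal sketch of the VI reformulation and of a PPA step in the H-norm, assuming a differentiable objective f and the constraint Ax = b (the notation is illustrative and not copied from the paper):

    \min_x \{ f(x) : Ax = b \}
    \;\Longleftrightarrow\;
    \text{find } w^{\ast} \in \Omega \text{ with } (w - w^{\ast})^{\top} F(w^{\ast}) \ge 0 \;\; \forall\, w \in \Omega,
    \qquad
    w = \begin{pmatrix} x \\ \lambda \end{pmatrix}, \quad
    F(w) = \begin{pmatrix} \nabla f(x) - A^{\top} \lambda \\ Ax - b \end{pmatrix}.

    A PPA step in the H-norm then produces w^{k+1} \in \Omega satisfying

    (w - w^{k+1})^{\top} \bigl[ F(w^{k+1}) + H\,(w^{k+1} - w^{k}) \bigr] \ge 0 \quad \forall\, w \in \Omega,

    and the "customization" consists of choosing the positive definite matrix H so that this subproblem has a closed-form or easily computed solution.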

    On Solutions of Sparsity Constrained Optimization
    Li-Li Pan · Nai-Hua Xiu · Sheng-Long Zhou
    2015, 3(4):  421.  doi:10.1007/s40305-015-0101-3

    In this paper, we mainly study the existence of solutions to sparsity constrained optimization (SCO). Based on the expressions of the tangent cone and normal cone of the sparsity constraint, we present and characterize two first-order necessary optimality conditions for SCO: N-stationarity and T-stationarity. We then give second-order necessary and sufficient optimality conditions for SCO. Finally, we extend these results to SCO with a nonnegativity constraint.
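
    For reference, the sparsity constrained problem studied here is roughly of the form (a sketch in our notation):

    \min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad x \in S := \{ x \in \mathbb{R}^n : \|x\|_0 \le s \},

    where \|x\|_0 counts the nonzero entries of x and s < n is a given sparsity level. The N- and T-stationarity conditions express first-order optimality through the normal cone and the tangent cone of the nonconvex set S at a candidate solution.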

    Entropy Function-Based Algorithms for Solving a Class of Nonconvex Minimization Problems
    Yu-Fan Li · Zheng-Hai Huang · Min Zhang
    2015, 3(4):  441.  doi:10.1007/s40305-015-0103-1

    Recently, the lp minimization problem (p ∈ (0, 1)) for sparse signal recovery has been studied extensively because of its efficiency. In this paper, we propose a general smoothing algorithmic framework based on the entropy function for solving a class of lp minimization problems, which includes the well-known unconstrained l2–lp problem as a special case. We show that any accumulation point of the sequence generated by the proposed algorithm is a stationary point of the lp minimization problem, and we derive a lower bound for the nonzero entries of any stationary point of the smoothing problem. We implement a specific version of the proposed algorithm, and the results indicate that the entropy function-based algorithm is effective.
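
    The unconstrained l2–lp problem mentioned above has the standard form (a sketch; A, b, and the regularization parameter λ > 0 are generic):

    \min_{x \in \mathbb{R}^n} \; \|Ax - b\|_2^2 + \lambda \|x\|_p^p, \qquad \|x\|_p^p = \sum_{i=1}^{n} |x_i|^p, \quad p \in (0, 1).

    The smoothing idea is to replace the nonsmooth, nonconvex terms |x_i|^p by a smooth approximation controlled by a smoothing parameter that is driven to zero; the paper's specific entropy-function construction is not reproduced here.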

    A Note on the Complexity of Proximal Iterative Hard Thresholding Algorithm
    Xue Zhang · Xiao-Qun Zhang
    2015, 3(4):  459.  doi:10.1007/s40305-015-0105-z

    The iterative hard thresholding (IHT) algorithm is a powerful and efficient algorithm for solving l0-regularized problems and has inspired many applications in sparse approximation and image processing. Recently, convergence results have been established for the proximal scheme of IHT, namely the proximal iterative hard thresholding (PIHT) algorithm (Blumensath and Davies, in J Fourier Anal Appl 14:629–654, 2008; Hu et al., Methods 67:294–303, 2015; Lu, Math Program 147:125–154, 2014; Trzasko et al., IEEE/SP 14th Workshop on Statistical Signal Processing, 2007) for solving the related l0-optimization problems. However, the complexity analysis of the PIHT algorithm has not been well explored. In this paper, we aim to provide some complexity estimates for the PIHT sequences. In particular, we show that the sequential iterate error is of order o(1/k). Under the assumption that the objective function is composed of a convex quadratic function and an l0 regularization term, we show that the PIHT algorithm has an R-linear convergence rate. Finally, we illustrate some applications of this algorithm to compressive sensing reconstruction and sparse learning and validate the estimated error bounds.
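
    As background, a minimal Python sketch of the classical IHT iteration for the l0-regularized least squares model min_x 0.5*||Ax - b||^2 + lam*||x||_0; the proximal (PIHT) variants analyzed in the paper generalize this scheme, and the step size and threshold below simply follow the standard proximal map of the l0 penalty:

        import numpy as np

        def iht_l0(A, b, lam, n_iter=500):
            # Sketch of classical iterative hard thresholding for
            #   min_x 0.5*||A x - b||^2 + lam*||x||_0
            # Illustrative only; not the paper's exact PIHT algorithm.
            n = A.shape[1]
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
            thresh = np.sqrt(2.0 * lam / L)      # hard-threshold level for step size 1/L
            x = np.zeros(n)
            for _ in range(n_iter):
                z = x - (A.T @ (A @ x - b)) / L  # gradient step on the least-squares term
                x = np.where(np.abs(z) > thresh, z, 0.0)  # prox of (lam/L)*||.||_0
            return x

    On a compressive sensing instance one would call, for example, x_hat = iht_l0(A, b, lam=0.1).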

    A Sequential Regression Model for Big Data with Attributive Explanatory Variables
    Qing-Ting Zhang · Yuan Liu · Wen Zhou · Zhou-Wang Yang
    2015, 3(4):  475.  doi:10.1007/s40305-015-0109-8

    As applications of big data modeling and analysis grow in scope, computational efficiency faces greater challenges in terms of storage and speed. In many practical problems, a great amount of historical data is collected sequentially and used for online statistical modeling. For modeling sequential data, we propose a sequential linear regression method that extracts essential information from historical data. This carefully selected information is then utilized to update the model according to a sequential estimation scheme. With this technique, earlier data no longer need to be stored, and the sequential updating is efficient in both speed and storage. A weighting strategy is introduced into the current model to determine the impact of data from different periods. Numerical experiments demonstrate that, compared with estimation methods that use the full historical data, our approach increases speed while decreasing the storage load.
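
    To illustrate the flavor of such sequential updating (this is a generic recursive least squares sketch with an exponential weighting factor, not the paper's exact estimator), one can maintain the sufficient statistics X^T X and X^T y instead of the raw data:

        import numpy as np

        class SequentialOLS:
            # Generic sketch of sequential least squares via running sufficient
            # statistics; the paper's weighting/updating scheme may differ.
            def __init__(self, n_features, forget=1.0):
                self.G = np.zeros((n_features, n_features))  # running weighted X^T X
                self.h = np.zeros(n_features)                # running weighted X^T y
                self.forget = forget                         # < 1 down-weights older periods

            def update(self, X_batch, y_batch):
                self.G = self.forget * self.G + X_batch.T @ X_batch
                self.h = self.forget * self.h + X_batch.T @ y_batch
                # Pseudo-inverse guards against rank deficiency in early batches.
                return np.linalg.pinv(self.G) @ self.h

    Each arriving batch updates G and h in memory proportional to the square of the number of features, so the raw historical observations can be discarded.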

    A Linear Mixed Integer Programming Model for N-Vehicle Exploration Problem
    Li-Li Wang · Bing-Ling She · Jun-Feng Liu · Jin-Chuan Cui
    2015, 3(4):  489.  doi:10.1007/s40305-015-0099-6

    Finding an exact solution of the N-vehicle exploration problem is NP-hard in the strong sense. In this paper, the authors build a linear mixed integer programming model for the N-vehicle exploration problem based on its properties. The model is then proved to be equivalent to the original problem. Given the model, one can apply existing methods and algorithms for mixed integer linear programming to the N-vehicle exploration problem, which enriches the set of methods available for solving it.

    Nonparallel Support Vector Machine Based on One Optimization Problem for Pattern Recognition
    Ying-Jie Tian · Xu-Chan Ju
    2015, 3(4):  499.  doi:10.1007/s40305-015-0095-x

    In this paper, we present a novel nonparallel support vector machine based on one optimization problem (NSVMOOP) for binary classification. The NSVMOOP is formulated to separate the classes with the largest possible angle between the normal vectors and the decision hyperplanes in the feature space, while at the same time implementing the structural risk minimization principle. Unlike other nonparallel classifiers, such as the representative twin support vector machine, it constructs the two nonparallel hyperplanes simultaneously by solving a single quadratic programming problem, for which a modified sequential minimal optimization algorithm is explored. The NSVMOOP is analyzed theoretically and implemented experimentally. Experimental results on both artificial and publicly available benchmark datasets show its feasibility and effectiveness.

    The First-Order Necessary Conditions for Sparsity Constrained Optimization
    Xue Li · Wen Song
    2015, 3(4):  521.  doi:10.1007/s40305-015-0107-x

    In this paper, we study optimization problems with sparsity constraints. We first give expressions for the Mordukhovich (limiting) normal cone of the sparsity constraint set and of its intersection with a polyhedral set; based on these expressions, we then present first-order necessary conditions for sparsity constrained optimization.
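
    In generic form, a first-order necessary condition of the type derived here reads as follows (a sketch for a smooth objective f; the paper's statements, including those for the polyhedral intersection, are more refined):

    0 \in \nabla f(\bar{x}) + N^{M}_{S}(\bar{x}), \qquad S = \{ x \in \mathbb{R}^n : \|x\|_0 \le s \},

    where N^{M}_{S}(\bar{x}) denotes the Mordukhovich (limiting) normal cone of S at a local minimizer \bar{x}.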

    Online Learning over a Decentralized Network Through ADMM
    Hao-Feng Xu · Qing Ling · Alejandro Ribeiro
    2015, 3(4):  537.  doi:10.1007/s40305-015-0104-0

    In this paper, we focus on the decentralized online learning problem in which online data streams are separately collected and collaboratively processed by a network of decentralized agents. Compared to centralized batch learning, such a framework is faithful to the decentralized and online nature of the data streams. We propose an online decentralized alternating direction method of multipliers that efficiently solves the online learning problem over a decentralized network. We prove an O(√T) regret bound when the instantaneous local cost functions are convex, and an O(log T) regret bound when the instantaneous local cost functions are strongly convex, where T is the number of iterations. Both regret bounds are of the same order as those of centralized online learning. Numerical experiments on decentralized online least squares and classification problems demonstrate the effectiveness of the proposed algorithm.
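
    For context, the regret of a decentralized online method is commonly measured against the best fixed decision in hindsight, roughly as (our notation; the paper's exact definition may differ in details such as how the agents' iterates are averaged):

    \mathrm{Reg}(T) \;=\; \sum_{t=1}^{T} \sum_{i=1}^{n} f_{i,t}(x_{i,t}) \;-\; \min_{x} \sum_{t=1}^{T} \sum_{i=1}^{n} f_{i,t}(x),

    where f_{i,t} is the instantaneous local cost of agent i at time t and x_{i,t} is that agent's decision; the bounds above state Reg(T) = O(√T) for convex and Reg(T) = O(log T) for strongly convex local costs.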

    Convergence Analysis of L-ADMM for Multi-block Linear-Constrained Separable Convex Minimization Problem
    Jun-Kai Feng · Hai-Bin Zhang · Cao-Zong Cheng · Hui-Min Pei
    2015, 3(4):  563.  doi:10.1007/s40305-015-0084-0

    We focus on the convergence analysis of the extended linearized alternating direction method of multipliers (L-ADMM) for solving convex minimization problems with three or more separable blocks in the objective function. Previous convergence analyses of the L-ADMM need to reduce multi-block convex minimization problems to two blocks by grouping the variables; moreover, no convergence rate analysis has been available for the L-ADMM. In this paper, we construct a counterexample showing that the extended L-ADMM can fail to converge. We then prove convergence and establish a sublinear convergence rate for the extended L-ADMM under the assumptions that the proximal gradient step sizes are smaller than certain values and that any two coefficient matrices in the linear constraints are orthogonal.
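
    The multi-block separable model treated here has the generic form (a sketch in our notation, with m ≥ 3 blocks):

    \min_{x_1, \dots, x_m} \; \sum_{i=1}^{m} \theta_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^{m} A_i x_i = b, \qquad x_i \in \mathcal{X}_i,

    where each \theta_i is convex and each \mathcal{X}_i is a closed convex set. In the linearized variant, each block update replaces the quadratic coupling term of the augmented Lagrangian by a proximal linearization, so that the x_i-subproblem reduces to a proximal step on \theta_i; the orthogonality assumption on the coefficient matrices is what the convergence result above relies on.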