
Table of Contents

    29 June 2013, Volume 1 Issue 2
    Continuous Optimization
    On the Linear Convergence of a Proximal Gradient Method for a Class of Nonsmooth Convex Minimization Problems
    Hai-Bin Zhang · Jiao-Jiao Jiang · Zhi-Quan Luo
    2013, 1(2):  163. 

    We consider a class of nonsmooth convex optimization problems in which
    the objective function is the composition of a strongly convex differentiable
    function with a linear mapping, regularized by the sum of both the 1-norm and
    the 2-norm of the optimization variables. This class of problems arises
    naturally in applications such as the sparse group Lasso, a popular technique
    for variable selection. An effective approach to solving such problems is the
    Proximal Gradient Method (PGM). In this paper we prove a local error bound
    around the optimal solution set for this problem and use it to establish the
    linear convergence of the PGM without assuming strong convexity of the overall
    objective function.
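
    A minimal numpy sketch of what one PGM iteration for the sparse group
    Lasso penalty above can look like; the data, group structure, and penalty
    weights in any use of it are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def soft(v, t):
    # elementwise soft-thresholding: prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sparse_group(v, groups, l1, l2):
    # prox of l1*||x||_1 + l2 * sum_g ||x_g||_2: soft-threshold
    # elementwise first, then shrink each group's 2-norm
    u = soft(v, l1)
    out = np.zeros_like(u)
    for g in groups:
        ng = np.linalg.norm(u[g])
        if ng > l2:
            out[g] = (1.0 - l2 / ng) * u[g]
    return out

def pgm(A, b, groups, l1, l2, steps=500):
    # proximal gradient on 0.5*||Ax - b||^2 + l1*||x||_1 + l2*sum_g ||x_g||_2
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = prox_sparse_group(x - grad / L, groups, l1 / L, l2 / L)
    return x
```

    With step size 1/L the iteration decreases the objective monotonically,
    which is the setting in which the paper's linear-convergence result applies.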

    A New Analysis on the Barzilai-Borwein Gradient Method
    Yu-Hong Dai
    2013, 1(2):  187.  doi:10.1007/s40305-013-0007-x

    Due to its simplicity and efficiency, the Barzilai and Borwein (BB) gradient
    method has received considerable attention in various fields. This paper
    presents a new analysis of the BB method for two-dimensional strictly convex
    quadratic functions. The analysis begins with the assumption that the gradient
    norms at the first two iterations are fixed. We show that there is a
    superlinear convergence step within at most three consecutive steps.
    Meanwhile, we provide a better convergence relation for the BB method. The
    influence of the starting point and the condition number on the convergence
    rate is comprehensively addressed.
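
    The BB step size the abstract refers to can be sketched as follows; the
    quadratic, starting point, and stopping rule here are illustrative choices,
    not the paper's analysis setup:

```python
import numpy as np

def bb_method(A, x0, tol=1e-10, max_iter=200):
    # BB gradient method on f(x) = 0.5 * x^T A x for an SPD quadratic.
    # The BB step size alpha = s^T s / s^T y reuses the previous step s
    # and gradient change y, giving a cheap quasi-Newton-like scalar.
    x = x0.astype(float)
    g = A @ x
    alpha = 1.0 / np.linalg.norm(A, 2)   # conservative first step
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)        # BB1 step size
        x, g = x_new, g_new
    return x, k
```

    On the two-dimensional quadratics studied in the paper, the iterates
    converge to the minimizer (here the origin) R-superlinearly.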

    Conjugate Decomposition and Its Applications  
    Li-Ping Wang · Jin-Yun Yuan
    2013, 1(2):  199.  doi:10.1007/s40305-013-0008-9

    The conjugate decomposition (CD), which was given implicitly for symmetric
    positive definite matrices based on the conjugate gradient method, is
    generalized to an arbitrary m×n matrix. The conjugate decomposition keeps
    some SVD properties, but loses uniqueness and part of the orthogonal
    projection property. From a computational point of view, the conjugate
    decomposition is much cheaper than the SVD. To illustrate the feasibility of
    the CD, some application examples are given. Finally, the application of the
    conjugate decomposition to frequency estimation is given, with a comparison
    to the SVD and FFT. The numerical results are promising.
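
    The CD itself is not reproduced here, but the underlying notion of
    A-conjugacy from the conjugate gradient method can be illustrated with a
    small numpy sketch; the Gram-Schmidt construction and the test matrix are
    assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def conjugate_directions(A):
    # Build A-conjugate directions p_i (p_i^T A p_j = 0 for i != j) from
    # the standard basis by modified Gram-Schmidt in the A-inner product
    # <u, v>_A = u^T A v, for a symmetric positive definite A.
    n = A.shape[0]
    P = np.eye(n)
    for i in range(n):
        for j in range(i):
            # remove the A-component of column i along column j
            P[:, i] -= (P[:, j] @ A @ P[:, i]) / (P[:, j] @ A @ P[:, j]) * P[:, j]
    return P
```

    By construction, P^T A P is diagonal, which is the property the CD
    exploits in place of the full orthogonality delivered by the SVD.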

    Discrete Optimization
    Some Upper Bounds Related with Domination Number
    Zhao Gu · Ji-Xiang Meng · Zhao Zhang · Jin E. Wan
    2013, 1(2):  217.  doi:10.1007/s40305-013-0012-0

    For a simple and connected graph G, denote the domination number, the
    diameter, and the radius of G by β(G), D(G), and r(G), respectively. In this
    paper, we settle two conjectures on the upper bounds of β(G) · D(G) and
    β(G) + r(G), which were proposed by the computer system AutoGraphiX.
    Extremal trees which attain the upper bounds are also considered.
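
    Computing the exact domination number β(G) is NP-hard in general; as a
    runnable illustration of the concept, here is a simple greedy heuristic
    that returns a dominating set and hence an upper bound on β(G) (the
    adjacency-dict representation is an assumption for the sketch):

```python
def greedy_dominating_set(adj):
    # Greedy heuristic: repeatedly pick the vertex whose closed
    # neighbourhood covers the most still-undominated vertices.
    undominated = set(adj)
    dom = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | set(adj[u])) & undominated))
        dom.add(v)
        undominated -= {v} | set(adj[v])
    return dom
```

    On a star, for example, the greedy choice is the centre alone, matching
    β = 1; on general graphs the heuristic only upper-bounds β(G).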

    Continuous Optimization
    New Bounds for RIC in Compressed Sensing
    Sheng-Long Zhou · Ling-Chen Kong · Nai-Hua Xiu
    2013, 1(2):  227.  doi:10.1007/s40305-013-0013-z

    This paper gives new bounds for the restricted isometry constant (RIC) in
    compressed sensing. Let Φ be an m×n real matrix and k a positive integer
    with k ≪ n. The main results of this paper show that if the restricted
    isometry constant of Φ satisfies δ_{8ak} < 1 and some other conditions, then
    a k-sparse solution can be recovered exactly via l1 minimization in the
    noiseless case. In particular, when a = 1, 1.5, 2, and 3, we have
    δ_{2k} < 0.5746 and δ_{8k} < 1, or δ_{2.5k} < 0.7046 and δ_{12k} < 1, or
    δ_{3k} < 0.7731 and δ_{16k} < 1, or δ_{4k} < 0.8445 and δ_{24k} < 1.
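
    As a runnable illustration of exact k-sparse recovery, here is a sketch of
    Orthogonal Matching Pursuit, a greedy alternative to l1 minimization that
    also admits RIC-based guarantees; this is not the algorithm analyzed in the
    paper, and the orthogonal test matrix below is a deliberately easy case:

```python
import numpy as np

def omp(Phi, b, k):
    # Orthogonal Matching Pursuit: greedily pick the column most
    # correlated with the residual, then re-fit on the chosen support.
    r, S = b.copy(), []
    for _ in range(k):
        S.append(int(np.argmax(np.abs(Phi.T @ r))))
        xS, *_ = np.linalg.lstsq(Phi[:, S], b, rcond=None)
        r = b - Phi[:, S] @ xS          # residual after least-squares re-fit
    x = np.zeros(Phi.shape[1])
    x[S] = xS
    return x
```

    When Φ has orthonormal columns, the correlations Φ^T r expose the true
    coefficients directly and OMP recovers a k-sparse signal exactly in k steps.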

    Uniform Parallel-Machine Scheduling with Time Dependent Processing Times
    Juan Zou · Yu-Zhong Zhang · Cui-Xia Miao
    2013, 1(2):  239.  doi:10.1007/s40305-013-0014-y

    We consider several uniform parallel-machine scheduling problems in which
    the processing time of a job is a linear increasing function of its starting
    time. The objectives are to minimize the total completion time of all jobs
    and the total load on all machines. We show that the problems are
    polynomially solvable when the increasing rates are identical for all jobs,
    and we propose a fully polynomial-time approximation scheme for the standard
    linear deteriorating function when the objective is to minimize the total
    load on all machines. We also consider the problem in which the processing
    time of a job is a simple linear increasing function of its starting time and
    each job has a delivery time. The objective is to find a schedule that
    minimizes the time by which all jobs are delivered, and we propose a fully
    polynomial-time approximation scheme for this problem.
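
    For a simple linear deteriorating model of the kind described above, say
    p_j(t) = a_j + b·t with a common rate b (an illustrative assumption), the
    completion times on one machine follow a short recursion, and the sketch
    below also shows why sequencing matters for total completion time:

```python
def completion_times(a_seq, b, t0=0.0):
    # Job started at time t takes a_j + b*t, so completion times obey
    # C_j = C_{j-1} + a_j + b * C_{j-1} = (1 + b) * C_{j-1} + a_j.
    t, out = t0, []
    for a in a_seq:
        t = t + a + b * t
        out.append(t)
    return out
```

    For instance, with b = 0.5 the sequence [1, 2] finishes at times 1 and 3.5,
    while [2, 1] finishes at 2 and 4, so ordering by smaller normal processing
    time first gives the smaller total completion time in this example.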

    Alternating Direction Method of Multipliers for Sparse Principal Component Analysis
    Shi-Qian Ma
    2013, 1(2):  253.  doi:10.1007/s40305-013-0016-9

    We consider a convex relaxation of sparse principal component analysis
    proposed by d’Aspremont et al. (SIAM Rev. 49:434–448, 2007). This convex relaxation
    is a nonsmooth semidefinite programming problem in which the l1 norm of the
    desired matrix is imposed in either the objective function or the constraint to improve
    the sparsity of the resulting matrix. The sparse principal component is obtained by a
    rank-one decomposition of the resulting sparse matrix. We propose an alternating direction
    method based on a variable-splitting technique and an augmented Lagrangian
    framework for solving this nonsmooth semidefinite programming problem. In contrast
    to the first-order method proposed in d’Aspremont et al. (SIAM Rev. 49:434–
    448, 2007), which solves approximately the dual problem of the original semidefinite
    programming problem, our method deals with the primal problem directly and solves
    it exactly, which guarantees that the resulting matrix is a sparse matrix. A global
    convergence result is established for the proposed method. Numerical results
    on both synthetic problems and real applications in text data classification
    and senate voting data are reported to demonstrate the efficacy of our method.
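
    The variable-splitting ADMM idea can be sketched on a DSPCA-style
    subproblem: split X = Z, project the X-update onto the trace-one PSD
    spectahedron, and soft-threshold the Z-update. The penalty parameters and
    iteration count below are illustrative assumptions, not the author's
    settings:

```python
import numpy as np

def proj_spectahedron(M):
    # Project a symmetric matrix onto {X PSD, trace X = 1} by projecting
    # its eigenvalues onto the probability simplex.
    w, V = np.linalg.eigh((M + M.T) / 2)
    u = np.sort(w)[::-1]
    css = np.cumsum(u) - 1.0
    r = np.max(np.where(u - css / np.arange(1, len(u) + 1) > 0)[0])
    theta = css[r] / (r + 1)
    w = np.maximum(w - theta, 0.0)
    return (V * w) @ V.T

def admm_sparse_pca(Sigma, rho_l1=0.5, mu=1.0, iters=200):
    # ADMM with splitting X = Z for
    #   min -<Sigma, X> + rho_l1 * ||Z||_1   s.t.  X PSD, trace X = 1, X = Z
    n = Sigma.shape[0]
    Z = np.zeros((n, n))
    U = np.zeros((n, n))
    for _ in range(iters):
        X = proj_spectahedron(Z - U + Sigma / mu)                        # X-update
        Z = np.sign(X + U) * np.maximum(np.abs(X + U) - rho_l1 / mu, 0)  # Z-update
        U = U + X - Z                                                    # dual update
    return X
```

    The returned X is feasible for the primal problem by construction, which
    mirrors the abstract's point that solving the primal directly guarantees a
    genuinely sparse resulting matrix after thresholding.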