
Table of Contents

    30 June 2015, Volume 3 Issue 2
    Preface
    Zai-Wen Wen · Wo-Tao Yin · Xiao-Ming Yuan
    2015, 3(2):  95.  doi:10.1007/s40305-015-0081-3
    Data-Driven Tight Frame for Multi-channel Images and Its Application to Joint Color-Depth Image Reconstruction
    Jin Wang · Jian-Feng Cai
    2015, 3(2):  99.  doi:10.1007/s40305-015-0074-2

    In image restoration, we usually assume that the underlying image has a good sparse approximation under a certain system. The wavelet tight frame system has been proven to be such an efficient system for sparsely approximating piecewise smooth images, and it has been widely used in many practical image restoration problems. However, images from different scenarios are so diverse that no static wavelet tight frame system can sparsely approximate all of them well. To overcome this, Cai et al. (Appl Comput Harmon Anal 37:89–105, 2014) recently proposed a method that derives a data-driven tight frame adapted to the specific input image, leading to a better sparse approximation. The data-driven tight frame has been applied successfully to image denoising and CT image reconstruction. In this paper, we extend this data-driven tight frame construction method to multi-channel images. We construct a discrete tight frame system for each channel and assume that their sparse coefficients have a joint sparsity. The multi-channel data-driven tight frame construction scheme is applied to joint color and depth image reconstruction. Experimental results show that the proposed approach performs better than state-of-the-art joint color and depth image reconstruction approaches.
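
    The construction builds on the single-channel data-driven tight frame scheme of Cai et al., which alternates hard-thresholded sparse coding with an SVD-based (orthogonal Procrustes) filter update. The Python sketch below illustrates that single-channel step; the function name, parameters, and random initialization are our own, and the paper's multi-channel extension would additionally couple the thresholding across channels through a joint-sparsity (group) norm.

    ```python
    import numpy as np

    def learn_tight_frame(patches, n_filters, lam, n_iters=30, seed=0):
        """Sketch of single-channel data-driven tight frame learning.
        patches: (d, N) matrix of vectorized image patches; requires
        n_filters <= d so that W has orthonormal columns."""
        rng = np.random.default_rng(seed)
        d = patches.shape[0]
        # random orthonormal initialization of the analysis filters
        W, _ = np.linalg.qr(rng.standard_normal((d, n_filters)))
        for _ in range(n_iters):
            coef = W.T @ patches                 # analysis coefficients
            coef[np.abs(coef) < lam] = 0.0       # hard thresholding (sparse coding)
            # filter update: orthogonal Procrustes via SVD keeps W^T W = I
            U, _, Vt = np.linalg.svd(patches @ coef.T, full_matrices=False)
            W = U @ Vt
        return W
    ```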

    Distributed and Consensus Optimization for Non-smooth Image Reconstruction
    Xiao-Jing Ye
    2015, 3(2):  117.  doi:10.1007/s40305-015-0073-3

    We present a primal-dual algorithm with consensus constraints that can effectively handle large-scale and complex data in a variety of non-smooth image reconstruction problems. In particular, we focus on the case where the data fidelity term can be decomposed into multiple relatively simple functions and deployed to parallel computing units that cooperatively solve for a consensual solution of the original problem. In this case, the subproblems usually have closed-form solutions or can be solved efficiently at the local computing units, and hence the per-iteration computational complexity is very low. A comprehensive convergence analysis of the algorithm, including its convergence rate, is established.
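
    As a rough illustration of the decomposition idea (and not the paper's exact primal-dual scheme), here is a generic consensus-ADMM sketch in which each fidelity block f_i is handled by its own computing unit through a proximal step and all units agree through an averaging step; all names and the prox convention are assumptions of ours.

    ```python
    import numpy as np

    def consensus_admm(prox_fs, prox_g, dim, rho=1.0, n_iters=200):
        """Generic consensus ADMM for min_x sum_i f_i(x) + g(x).
        Each prox(v, t) is assumed to return argmin_x h(x) + (t/2)||x - v||^2."""
        L = len(prox_fs)
        x = np.zeros((L, dim))     # one local copy per computing unit
        u = np.zeros((L, dim))     # scaled dual variables
        z = np.zeros(dim)          # consensus variable
        for _ in range(n_iters):
            for i in range(L):     # local proximal steps (parallelizable)
                x[i] = prox_fs[i](z - u[i], rho)
            z = prox_g((x + u).mean(axis=0), rho * L)   # consensus / averaging step
            u += x - z             # dual update enforcing agreement
        return z
    ```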

    An Alternating Direction Approximate Newton Algorithm for Ill-Conditioned Inverse Problems with Application to Parallel MRI
    William Hager · Cuong Ngo · Maryam Yashtini · Hong-Chao Zhang
    2015, 3(2):  139.  doi:10.1007/s40305-015-0078-y

    An alternating direction approximate Newton (ADAN) method is developed for solving inverse problems of the form min{φ(Bu) + (1/2)‖Au − f‖₂²}, where φ is convex and possibly nonsmooth, and A and B are matrices. Problems of this form arise in image reconstruction, where A is the matrix describing the imaging device, f is the measured data, φ is a regularization term, and B is a derivative operator. The proposed algorithm is designed to handle applications where A is a large, dense, ill-conditioned matrix. The algorithm is based on the alternating direction method of multipliers (ADMM) and an approximation to Newton's method in which a term in Newton's Hessian is replaced by a Barzilai–Borwein (BB) approximation. It is shown that ADAN converges to a solution of the inverse problem. Numerical results are provided using test problems from parallel magnetic resonance imaging. ADAN was faster than a proximal ADMM scheme that does not employ a BB Hessian approximation, and it was more stable and much simpler than the related Bregman operator splitting algorithm with variable step size, which also employs a BB-based Hessian approximation.
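
    To make the Hessian replacement concrete, the sketch below shows one approximate-Newton u-update in which the AᵀA block of Newton's Hessian is replaced by a BB scalar δI, so only the system (δI + ρBᵀB) must be solved. This is a hypothetical dense illustration with names of our own; in practice this system is inverted cheaply (e.g., by FFT when B is a derivative operator).

    ```python
    import numpy as np

    def bb_delta(u, u_prev, A, floor=1e-12):
        """BB scalar approximating A^T A on the latest displacement s:
        delta = ||A s||^2 / ||s||^2."""
        s = u - u_prev
        As = A @ s
        return max(As @ As / max(s @ s, floor), floor)

    def adan_u_update(u, A, f, B, w, b, rho, delta):
        """One approximate-Newton step for the ADMM u-subproblem
        min 0.5||Au - f||^2 + (rho/2)||Bu - w + b||^2."""
        grad = A.T @ (A @ u - f) + rho * (B.T @ (B @ u - w + b))
        # Newton's Hessian A^T A + rho B^T B, with A^T A replaced by delta*I
        H = delta * np.eye(u.size) + rho * (B.T @ B)
        return u - np.linalg.solve(H, grad)
    ```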

    A Parallel Line Search Subspace Correction Method for Composite Convex Optimization
    Qian Dong · Xin Liu · Zai-Wen Wen · Ya-Xiang Yuan
    2015, 3(2):  163.  doi:10.1007/s40305-015-0079-x

    In this paper, we investigate a parallel subspace correction framework for composite convex optimization. The variables are first divided into a few blocks based on certain rules. At each iteration, the algorithms solve a suitable subproblem on each block simultaneously, construct a search direction by combining the solutions on all blocks, and then identify a new point along this direction using a step size satisfying the Armijo line search condition. The algorithms are called PSCLN and PSCLO, respectively, depending on whether there are overlapping regions between two immediately adjacent blocks of variables. Their convergence is established under mild assumptions. We compare PSCLN and PSCLO with the parallel version of the fast iterative thresholding algorithm, the fixed-point continuation method using the Barzilai–Borwein step size, and the greedy coordinate block descent method for solving l1-regularized minimization problems. Our numerical results show that PSCLN and PSCLO run fast and return solutions no worse than those from the state-of-the-art algorithms on most test problems. It is also observed that the overlapping domain decomposition scheme is helpful when the data of the problem have certain special structures.
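
    A hypothetical sketch of one such iteration follows: block subproblems are solved independently (hence parallelizable), their solutions are combined into a single search direction, and Armijo backtracking picks the step size. The decrease test here uses ‖d‖² as a simple surrogate for the predicted decrease, which simplifies the condition used in the paper.

    ```python
    import numpy as np

    def pscl_step(x, blocks, solve_block, F, alpha0=1.0, beta=0.5, sigma=1e-4):
        """One parallel subspace correction iteration (non-overlapping sketch).
        blocks: list of index arrays partitioning the variables;
        solve_block(x, blk): user-supplied solver for the block subproblem;
        F: the composite objective."""
        d = np.zeros_like(x)
        for blk in blocks:                       # independent -> parallelizable
            d[blk] = solve_block(x, blk) - x[blk]
        # Armijo backtracking along the combined direction
        f0, alpha = F(x), alpha0
        while F(x + alpha * d) > f0 - sigma * alpha * (d @ d):
            alpha *= beta
            if alpha < 1e-12:
                break
        return x + alpha * d
    ```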

    Decentralized and Privacy-Preserving Low-Rank Matrix Completion
    An-Ya Lin · Qing Ling
    2015, 3(2):  189.  doi:10.1007/s40305-015-0080-4

    In this paper, we propose a decentralized algorithm to solve the low-rank matrix completion problem and analyze its privacy-preserving property. Suppose that we want to recover a low-rank matrix D = [D1, D2, · · · , DL] from a subset of its entries. In a network composed of L agents, each agent i observes some entries of Di. We factorize the unknown matrix D as the product of a public matrix X, which is common to all agents, and a private matrix Y = [Y1, Y2, · · · , YL], of which Yi is held by agent i only. Each agent i updates Yi and its local estimate of X, denoted by X(i), in an alternating manner. Through exchanging information with neighbors, all the agents move toward a consensus on the estimates X(i). Once the consensus is (nearly) reached throughout the network, each agent i recovers Di = X(i)Yi, and thus D is recovered. In this process, communication through the network may disclose sensitive information about the data matrices Di to a malicious agent. We prove that in the proposed algorithm, D-LMaFit, if the network topology is well designed, the malicious agent is unable to reconstruct the sensitive information of others.
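
    The following is an illustrative, gradient-based sketch of one such round, not the paper's exact D-LMaFit updates: each agent refines its private factor Yi locally, and only the public copies X(i) are mixed over the network through a doubly stochastic weight matrix. This structure is why the raw data and private factors never traverse the network; all names and step rules below are assumptions of ours.

    ```python
    import numpy as np

    def decentralized_round(X, Y, D, M, W, tau=0.01):
        """X[i]: agent i's copy of the public factor (n x r);
        Y[i]: agent i's private factor (r x m_i);
        D[i], M[i]: observed data block and 0/1 mask;
        W: doubly stochastic mixing matrix of the network."""
        L = len(X)
        X_new = []
        for i in range(L):
            R = M[i] * (X[i] @ Y[i] - D[i])   # residual on observed entries
            Y[i] = Y[i] - tau * X[i].T @ R    # private factor: local gradient step
            R = M[i] * (X[i] @ Y[i] - D[i])   # refresh residual after Y update
            G = R @ Y[i].T                    # gradient w.r.t. the public factor
            # public factor: mix neighbors' copies, then take a local step
            X_new.append(sum(W[i, j] * X[j] for j in range(L)) - tau * G)
        return X_new, Y
    ```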

    Nonconvex Sorted l1 Minimization for Sparse Approximation
    Xiao-Lin Huang · Lei Shi · Ming Yan
    2015, 3(2):  207.  doi:10.1007/s40305-014-0069-4

    The l1 norm is the tight convex relaxation of the l0 norm and has been successfully applied to recovering sparse signals. However, for problems with fewer samples than required for accurate l1 recovery, one needs to apply nonconvex penalties such as the lp norm. As one method for solving lp minimization problems, iteratively reweighted l1 minimization updates the weight for each component based on the value of the same component at the previous iteration. It assigns large weights to components that are small in magnitude and small weights to components that are large in magnitude. The set of weights is not fixed, which makes the analysis of this method difficult. In this paper, we consider a weighted l1 penalty with a fixed set of weights, where the weights are assigned based on the sort of all the components in magnitude: the smallest weight is assigned to the largest component in magnitude. This new penalty is called nonconvex sorted l1. We then propose two methods for solving nonconvex sorted l1 minimization problems: iteratively reweighted l1 minimization and iterative sorted thresholding, and prove that both methods converge to a local minimizer of the nonconvex sorted l1 minimization problem. We also show that the two methods are generalizations of iterative support detection and iterative hard thresholding, respectively. Numerical experiments demonstrate the better performance of assigning weights by sort compared with assigning them by value.
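
    The key proximal operation behind iterative sorted thresholding can be sketched as follows: sort the components by magnitude, assign the fixed (ascending) weights so that the smallest threshold hits the largest component, and then soft-threshold. This is a simplified illustration; ties and the exact prox derivation are handled more carefully in the paper.

    ```python
    import numpy as np

    def sorted_soft_threshold(z, weights):
        """Soft thresholding with rank-dependent thresholds: weights are
        assigned by magnitude rank, smallest weight to largest component."""
        order = np.argsort(-np.abs(z))      # indices by decreasing magnitude
        lam = np.empty_like(z)
        lam[order] = np.sort(weights)       # smallest threshold -> largest entry
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
    ```

    One iterative sorted thresholding step for min f(x) + (sorted-l1 penalty) would then read x = sorted_soft_threshold(x - t * grad_f(x), t * weights) for a step size t.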

    Sparse and Low-Rank Covariance Matrix Estimation
    Sheng-Long Zhou · Nai-Hua Xiu · Zi-Yan Luo · Ling-Chen Kong
    2015, 3(2):  231.  doi:10.1007/s40305-014-0058-7

    This paper aims at achieving a simultaneously sparse and low-rank estimator of a semidefinite population covariance matrix. We first exploit a convex optimization model that employs an l1-norm penalty to encourage sparsity and a nuclear-norm penalty to favor the low-rank property. For the proposed estimator, we then prove that, with high probability, the Frobenius norm of the estimation error is of order O((s log p)/n) under mild conditions, where s and p denote the number of nonzero entries and the dimension of the population covariance, respectively, and n denotes the sample size. Finally, an efficient alternating direction method of multipliers with global convergence is proposed to tackle this problem, and the merits of the approach are illustrated by numerical simulations.
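
    As an illustration of such an ADMM solver, the sketch below tackles the assumed model min 0.5‖T − S‖²_F + λ‖T‖₁ + μ‖T‖_* over positive semidefinite T via the splitting T = Z: the T-update is an entrywise soft threshold and the Z-update is an eigenvalue soft threshold (the nuclear-norm prox combined with the PSD projection). The precise loss and constants in the paper may differ.

    ```python
    import numpy as np

    def sparse_lowrank_cov(S, lam, mu, rho=1.0, n_iters=200):
        """ADMM sketch: S is the (symmetric) sample covariance matrix;
        T carries the l1 term, Z the nuclear norm plus the PSD constraint."""
        p = S.shape[0]
        Z, U = np.eye(p), np.zeros((p, p))
        for _ in range(n_iters):
            # T-update: entrywise soft thresholding of a weighted average
            V = (S + rho * (Z - U)) / (1.0 + rho)
            T = np.sign(V) * np.maximum(np.abs(V) - lam / (1.0 + rho), 0.0)
            # Z-update: eigenvalue soft thresholding, which is the prox of
            # mu*||.||_* restricted to the PSD cone
            w, Q = np.linalg.eigh(T + U)
            Z = Q @ np.diag(np.maximum(w - mu / rho, 0.0)) @ Q.T
            U += T - Z
        return Z
    ```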