Non-smooth optimization

    DC Programming Algorithm for Clusterwise Linear L1 Regression
    Adil M. Bagirov, Sona Taheri
    Journal of the Operations Research Society of China    2017, 5 (2): 233-.   DOI: 10.1007/s40305-017-0151-9

    The aim of this paper is to develop an algorithm for solving the clusterwise
    linear least absolute deviations regression problem. This problem is formulated as a
    nonsmooth nonconvex optimization problem, and the objective function is represented
    as a difference of convex functions. Optimality conditions are derived by using this
    representation. An algorithm is designed based on the difference of convex representation
    and an incremental approach. The proposed algorithm is tested using small to
    large artificial and real-world data sets.
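The difference-of-convex representation mentioned in the abstract can be illustrated numerically: the clusterwise L1 objective charges each point to its best-fitting linear model, and the pointwise minimum of absolute residuals splits into a difference of two convex sums. The sketch below uses invented toy data and model count; it only demonstrates the DC identity, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n points in R^d with responses y, and k candidate linear
# models (W[j], b[j]) -- all sizes and values invented for illustration.
n, d, k = 40, 2, 3
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
W = rng.normal(size=(k, d))
b = rng.normal(size=k)

# Absolute residual of point i under model j: h_ij = |y_i - x_i^T w_j - b_j|.
H = np.abs(y[:, None] - X @ W.T - b[None, :])   # shape (n, k)

# Clusterwise L1 objective: each point pays its smallest residual.
f_direct = H.min(axis=1).sum()

# DC identity: min_j h_ij = sum_j h_ij - max_j sum_{l != j} h_il,
# so f = g - h with both g and h convex in the model parameters.
g = H.sum()
h = (H.sum(axis=1, keepdims=True) - H).max(axis=1).sum()
f_dc = g - h

print(abs(f_direct - f_dc))  # the two evaluations agree up to rounding
```

The identity holds because the sum over the k−1 worst models is largest exactly when the excluded model is the best one.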


    Optimality Conditions of Approximate Solutions for Nonsmooth Semi-infinite Programming Problems

    Xian-Jun Long, Yi-Bin Xiao, Nan-Jing Huang
    Journal of the Operations Research Society of China    2018, 6 (2): 289-299.   DOI: 10.1007/s40305-017-0167-1

    In this paper, we study optimality conditions of approximate solutions for nonsmooth semi-infinite programming problems. Three new classes of functions, namely ε-pseudoconvex functions of type I and type II and ε-quasiconvex functions, are introduced. By utilizing these new concepts, sufficient optimality conditions of approximate solutions for the nonsmooth semi-infinite programming problem are established. Some examples are also presented. The results obtained in this paper improve the corresponding results of Son et al. (J Optim Theory Appl 141:389–409, 2009).
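The notion of an approximate (ε-)solution underlying the abstract can be checked numerically on a toy semi-infinite program: minimize a nonsmooth objective subject to infinitely many constraints indexed by t ∈ [0, 1]. The problem, discretization, and tolerance below are invented for illustration and are not taken from the paper.

```python
import numpy as np

eps = 0.1
T = np.linspace(0.0, 1.0, 1001)      # discretization of the infinite index set

f = lambda x: abs(x)                  # nonsmooth convex objective
g = lambda x, t: t * x - 1.0          # constraint family: g_t(x) <= 0 for all t

def is_eps_solution(x, f_star, eps):
    # x is an eps-solution if it is feasible for every (sampled) index t
    # and its objective value is within eps of the optimal value f_star.
    feasible = all(g(x, t) <= 1e-12 for t in T)
    return feasible and f(x) <= f_star + eps

# Feasible set is x <= 1, so the minimizer is x* = 0 with f* = 0.
print(is_eps_solution(0.05, 0.0, eps))  # True: feasible, |0.05| <= 0 + 0.1
print(is_eps_solution(0.5, 0.0, eps))   # False: feasible but not eps-optimal
```

Sufficient conditions of the kind established in the paper certify such points without enumerating the (infinite) constraint set.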

    On the Convergence of Asynchronous Parallel Iteration with Unbounded Delays
    Zhimin Peng, Yangyang Xu, Ming Yan, Wotao Yin
    Journal of the Operations Research Society of China    2019, 7 (1): 5-42.   DOI: 10.1007/s40305-017-0183-1

    Recent years have witnessed a surge of asynchronous parallel (async-parallel) iterative algorithms, driven by problems involving very large-scale data and large numbers of decision variables. Because of asynchrony, the iterates are computed with outdated information, and the age of that outdated information, which we call the delay, is the number of times it has been updated since its creation. Almost all recent works prove convergence under the assumption of a finite maximum delay and set their stepsize parameters accordingly. However, the maximum delay is practically unknown. This paper presents a convergence analysis of an async-parallel method from a probabilistic viewpoint, allowing for large, unbounded delays. An explicit stepsize formula that guarantees convergence is given in terms of the delays' statistics. With p + 1 identical processors, we empirically measured that delays closely follow the Poisson distribution with parameter p, matching our theoretical model, and thus the stepsize can be set accordingly. Simulations on both convex and nonconvex optimization problems demonstrate the validity of our analysis and also show that the existing maximum-delay-induced stepsize is too conservative, often slowing down the convergence of the algorithm.
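The effect described in the abstract can be simulated serially: run gradient descent on a convex quadratic where each update reads an iterate that is `delay` steps old, with delays drawn from the Poisson(p) model the authors observed empirically. The stepsize shrinkage used below is an assumed heuristic for illustration, not the paper's explicit formula; the data and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimize f(x) = 0.5 * ||A x - c||^2 using delayed gradients.
A = rng.normal(size=(20, 5))
c = rng.normal(size=20)
L = np.linalg.norm(A.T @ A, 2)        # Lipschitz constant of the gradient

p = 4                                  # hypothetical mean delay (Poisson parameter)
step = 1.0 / (L * (2 * p + 1))         # shrink stepsize with expected delay
                                       # (assumed heuristic, not the paper's rule)

x_star, *_ = np.linalg.lstsq(A, c, rcond=None)

history = [rng.normal(size=5)]
for t in range(3000):
    delay = min(rng.poisson(p), t)     # read an outdated iterate
    x_old = history[-1 - delay]
    grad = A.T @ (A @ x_old - c)
    history.append(history[-1] - step * grad)

print(np.linalg.norm(history[-1] - x_star))  # distance to the true minimizer
```

Despite every gradient being computed from stale information, the iterates still approach the least-squares solution, which is the qualitative point of the paper's unbounded-delay analysis.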
