Title
Computational Assessment of the Anderson and Nesterov acceleration methods for large scale proximal gradient problems
Date Issued
01 January 2021
Access level
metadata-only access
Resource Type
conference paper
Publisher(s)
Institute of Electrical and Electronics Engineers Inc.
Abstract
Proximal gradient (PG) algorithms target optimization problems formed as the sum of two convex functions, i.e. F = f + g, where ∇f is L-Lipschitz continuous and g is possibly nonsmooth. Accelerated PG methods, which use past information to speed up PG's original rate of convergence (RoC), are of particular practical interest since they are guaranteed to achieve at least O(k^-2). While several alternatives exist, Nesterov's acceleration is arguably the de facto method. However, in recent years Anderson acceleration, a well-established technique that has recently been adapted to PG, has gained considerable attention due to its simplicity and its practical speed-up with respect to Nesterov's method for small- to medium-scale (number of variables) problems. In this paper we focus on a computational (Python-based) assessment of the Anderson and Nesterov acceleration methods for large-scale optimization problems. The computational evidence from our experiments, which particularly target Convolutional Sparse Representations, agrees with our theoretical analysis: the extra burden (in both memory and computation) associated with Anderson acceleration imposes a practical limit, giving Nesterov's method a clear edge for large-scale problems.
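To make the accelerated PG iteration described in the abstract concrete, the following is a minimal Python sketch of Nesterov-accelerated proximal gradient (FISTA-style) applied to an illustrative l1-regularized least-squares problem. The objective, problem sizes, and parameter values are assumptions chosen for illustration only; they do not reproduce the paper's Convolutional Sparse Representation experiments or its implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via Nesterov-accelerated PG.

    Here f(x) = 0.5*||Ax - b||^2 is the smooth term (gradient is L-Lipschitz)
    and g(x) = lam*||x||_1 is the nonsmooth term, matching F = f + g.
    """
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    y = x.copy()                                   # extrapolated point
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                   # gradient of f at y
        x_new = soft_threshold(y - grad / L, lam / L)          # proximal step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)          # Nesterov extrapolation
        x, t = x_new, t_new
    return x

if __name__ == "__main__":
    # Small synthetic example (illustrative data, not from the paper).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    x_true = np.zeros(50)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = fista(A, b, lam=0.1)
```

Anderson acceleration would instead combine several past iterates at each step, which is the extra memory and per-iteration cost the paper identifies as the practical limitation at large scale.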
Language
English
OCDE Knowledge area
Other engineering and technologies
Scopus EID
2-s2.0-85123277367
ISBN
9781665416689
Source
2021 22nd Symposium on Image, Signal Processing and Artificial Vision, STSIVA 2021 - Conference Proceedings
Sources of information: Directorio de Producción Científica Scopus