Everything Old is New Again: The Return of Gradient-Based Optimization Methods

14.03.2018 16:15 - 17:15

Marc Teboulle (Tel-Aviv University)

The gradient method, forged by Cauchy about 170 years ago, has seen a strong revival of interest in modern optimization over the last decade through many of its variants and relatives, known as first-order methods (FOM). This renewed interest in FOM has emerged from the current high demand for solving optimization problems arising in a wide spectrum of modern applications, e.g., in signal processing, image sciences, machine learning, and physics.

These problems are often ill-posed, nonsmooth, convex or nonconvex, and typically of very large, or even huge, scale. This rules out the option of using sophisticated algorithms, such as Newton-type schemes (e.g., involving inversion of matrices), which often become prohibitively expensive. Elementary first-order methods using function values and gradient/subgradient information then often remain our best alternative for tackling such large-scale optimization problems. In turn, this rich collection of applied problems provides fresh perspectives for optimization algorithms, leading to new fundamental research with challenging theoretical and computational questions in the field.
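To make the contrast concrete (this illustration is not part of the abstract), here is a minimal sketch of such an elementary first-order scheme: plain gradient descent with a fixed step size. The toy least-squares objective, step size, and tolerance below are all assumptions chosen for the example; each iteration needs only a gradient evaluation, never a matrix inversion.

```python
import numpy as np

def gradient_descent(grad, x0, step=0.05, max_iter=1000, tol=1e-8):
    """Plain gradient descent: only gradient evaluations, no matrix inversions."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop once the gradient is small
            break
        x = x - step * g              # first-order update: cheap per iteration
    return x

# Toy example (assumed data): f(x) = 0.5 * ||A x - b||^2, grad f(x) = A^T (A x - b)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
x_star = gradient_descent(lambda x: A.T @ (A @ x - b), x0=np.zeros(2))
```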

We discuss some recent research in this domain where gradient-based optimization algorithms have been developed and applied successfully in various areas. In particular, we highlight the ways in which mathematical structures and data information can be beneficially exploited to design and analyze simple convex and nonconvex optimization methods. The talk is intended for a wide audience. We will assume (almost) no prior knowledge of continuous optimization.
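As a hedged illustration of what "exploiting structure" can mean (not taken from the talk itself): when the objective splits into a smooth part f and a simple nonsmooth part g, e.g. g(x) = lam * ||x||_1, a proximal gradient step handles g through its closed-form proximal map (soft-thresholding) rather than through subgradients. The problem data, regularization weight, and step-size rule below are invented for the sketch.

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1: shrink each coordinate toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, max_iter=500):
    """Minimize f(x) + g(x): gradient step on smooth f, prox step on simple g."""
    x = x0
    for _ in range(max_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy l1-regularized least squares (assumed data): 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L = ||A||^2 the Lipschitz constant of grad f
x_hat = proximal_gradient(
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_g=lambda v, t: soft_threshold(v, lam * t),
    x0=np.zeros(5),
    step=step,
)
```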

Organiser:
Radu Bot
Location:
Sky Lounge, 12. OG, OMP 1