Acceleration of First-Order Methods in Optimization (Dr. Samir Adly, Université de Limoges)

Date and Time

Location

SSC 3317

Details

This presentation, aimed at a broad spectrum of researchers, highlights the latest advancements in the acceleration of first-order optimization algorithms, a field that currently attracts the attention of many research teams worldwide. First-order methods, such as gradient descent and stochastic gradient descent, have gained significant popularity. The seminal development in this area is due to Yurii Nesterov, who in 1983 proposed a class of accelerated gradient methods with a provably faster global convergence rate than gradient descent: O(1/k^2) versus O(1/k) in function values for smooth convex problems. Another notable contribution is the FISTA algorithm, introduced by Beck and Teboulle in 2009, which has been widely adopted in the machine learning and signal and image processing communities.
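For orientation, here is a minimal sketch of a Nesterov-style accelerated gradient iteration; it is not taken from the talk, and the step size and the (k - 1)/(k + 2) momentum schedule are standard illustrative choices rather than anything specific to the speaker's results.

```python
import numpy as np

def nesterov_agd(grad_f, x0, step, n_iter=100):
    """Nesterov-style accelerated gradient sketch.

    grad_f : callable returning the gradient of a smooth convex f
    x0     : starting point (numpy array)
    step   : step size, e.g. 1/L for an L-smooth f
    """
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, n_iter + 1):
        # extrapolation (momentum) step with the classical (k-1)/(k+2) weight
        y = x + (k - 1) / (k + 2) * (x - x_prev)
        # gradient step taken at the extrapolated point
        x_prev, x = x, y - step * grad_f(y)
    return x

# usage: minimize f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)          # Lipschitz constant of the gradient
x_star = nesterov_agd(grad, np.zeros(2), step=1.0 / L)
```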
 
From another angle, gradient-based optimization algorithms can be analyzed through the lens of ordinary differential equations (ODEs). This perspective makes it possible to propose new algorithms by discretizing these ODEs and to enhance their performance through acceleration techniques, all while maintaining the low computational complexity required for the analysis of massive data.
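As a standard illustration of this viewpoint (a well-known result of Su, Boyd, and Candès, not material specific to this talk), Nesterov's accelerated gradient method can be interpreted as a discretization of an inertial ODE with vanishing viscous damping:

```latex
% Inertial ODE whose discretization yields Nesterov-type accelerated schemes
% (Su, Boyd, and Candes, 2016); the damping coefficient 3/t is the critical
% value for the O(1/t^2) decay of f(x(t)) - min f.
\[
  \ddot{x}(t) + \frac{3}{t}\,\dot{x}(t) + \nabla f\bigl(x(t)\bigr) = 0 .
\]
```

Under the time identification t ≈ k√s, where s is the step size, an explicit discretization of this ODE recovers the familiar momentum coefficient (k - 1)/(k + 2) ≈ 1 - 3/k.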
 
We will also explore the Ravine method, introduced by Gelfand and Tsetlin in 1961. Intriguingly, the Ravine method and Nesterov's accelerated gradient method are closely related: one can be obtained from the other simply by reversing the order of the extrapolation and gradient steps in their definitions. Even more surprisingly, both methods are governed by the same equations. As a result, practitioners often employ the Ravine method, occasionally confusing it with Nesterov's accelerated gradient method.
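To make the comparison concrete, here is a sketch of the Ravine iteration under the same illustrative step-size and momentum choices as the Nesterov sketch above; placing the two loops side by side makes the reversal of the gradient and extrapolation steps explicit. The variable names are assumptions for illustration only.

```python
import numpy as np

def ravine(grad_f, y0, step, n_iter=100):
    """Ravine-style sketch: gradient step first, then extrapolation.

    The sequence tracked here is the extrapolated iterate y_k; x_k is the
    auxiliary gradient-step point (compare with the Nesterov sketch above,
    where the order of the two operations is reversed).
    """
    y = y0.copy()
    x_prev = y0.copy()
    for k in range(1, n_iter + 1):
        x = y - step * grad_f(y)                    # gradient step at y_k
        y = x + (k - 1) / (k + 2) * (x - x_prev)    # extrapolation step
        x_prev = x
    return y

# usage: simple quadratic f(x) = 0.5 * ||x||^2, whose gradient is x
y_star = ravine(lambda x: x, np.ones(2), step=0.5)
```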
 
Throughout the presentation, we will also incorporate historical facts and pose open questions to encourage further exploration in the field.
