Dialnet

Acceleration methods for classic convex optimization algorithms

  • Author: Alberto Torres Barrán
  • Thesis supervisor: José Ramón Dorronsoro Ibero
  • Defence: Universidad Autónoma de Madrid (Spain), 2017
  • Language: Spanish
  • Thesis committee: Aníbal Ramón Figueiras Vidal (chair), Daniel Hernández Lobato (secretary), Amparo Alonso Betanzos (member), César Hervás Martínez (member), Ana Pilar González Marcos (member)
  • Doctoral programme: Programa de Doctorado en Ingeniería Informática y de Telecomunicación, Universidad Autónoma de Madrid
  • Abstract
    • Most Machine Learning models are defined in terms of a convex optimization problem. Thus, developing algorithms to quickly solve such problems is of great interest to the field. In this thesis we focus on two of the most widely used models, the Lasso and Support Vector Machines (SVMs).

      The former belongs to the family of regularization methods; it was introduced in 1996 to perform variable selection and regression at the same time. This is accomplished by adding an l1-regularization term to the least squares model, which yields interpretable models together with a good generalization error.
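
      For orientation, the l1-regularized least squares problem referred to here is usually written (in notation introduced for this sketch, not taken from the abstract) as

      \[
        \min_{\beta \in \mathbb{R}^d} \; \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2 \;+\; \lambda \lVert \beta \rVert_1, \qquad \lambda > 0,
      \]

      where X is the data matrix, y the response vector and \lambda the regularization strength; the \ell_1 term drives some coefficients exactly to zero, which is what produces the variable selection mentioned above.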

      Support Vector Machines were originally formulated to solve a classification problem by finding the maximum-margin hyperplane, that is, the hyperplane that separates two sets of points and lies at equal distance from both of them. SVMs were later extended to handle non-separable classes and, through the kernel trick, non-linear classification problems. A first contribution of this work is a careful analysis of the existing algorithms for solving both problems, describing not only the theory behind them but also pointing out the possible advantages and disadvantages of each one.
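
      Again for orientation only, the soft-margin, kernelized SVM that the SMO algorithm mentioned below works on is usually handled through its dual problem (standard notation, not taken from the abstract):

      \[
        \max_{\alpha} \;\sum_i \alpha_i - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
        \quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; \sum_i \alpha_i y_i = 0,
      \]

      where K is the kernel, C the penalty on margin violations and y_i \in \{-1, +1\} the class labels.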

      Although the Lasso and SVMs solve very different problems, we show in this thesis that they are equivalent. Following a recent result by Jaggi, given an instance of one model we can construct an instance of the other with the same solution, and vice versa. This equivalence allows us to translate theoretical and practical results, such as algorithms, between two fields that have otherwise been developed independently. We give in this thesis not only the theoretical result but also a practical application, which consists of solving the Lasso problem using the SMO algorithm, the state-of-the-art solver for non-linear SVMs. We also perform experiments comparing SMO to GLMNet, one of the most popular solvers for the Lasso.
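
      As a rough sketch of the equivalence (the exact construction is given in Jaggi's paper and in the thesis, and is not reproduced here), the constrained form of the Lasso and the simplex-constrained quadratic behind the hard-margin SVM look, respectively, like

      \[
        \min_{\lVert w \rVert_1 \le r} \lVert Xw - y \rVert_2^2
        \qquad \text{and} \qquad
        \min_{\lambda \in \Delta} \lVert Z\lambda \rVert_2^2,
        \quad \Delta = \{\lambda \ge 0 : \textstyle\sum_i \lambda_i = 1\},
      \]

      and the reduction amounts to building the columns of Z from the Lasso data X, y and the radius r, so that minimizers of one problem can be mapped to minimizers of the other.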

      The results obtained show that SMO is competitive with GLMNet, and sometimes even faster.

      Furthermore, motivated by a recent trend in which classical optimization methods are being rediscovered in improved forms and successfully applied to many problems, we have also analyzed two classical momentum-based methods: the Heavy Ball algorithm, introduced by Polyak in 1963, and Nesterov's Accelerated Gradient, introduced in 1983. In this thesis we develop practical versions of Conjugate Gradient, which is essentially equivalent to the Heavy Ball method, and of Nesterov's Acceleration for the SMO algorithm. Experiments comparing the convergence of all the methods are also carried out. The results show that the proposed algorithms can achieve faster convergence in terms of both iterations and execution time.
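
      The two momentum updates differ only in where the gradient is evaluated: Heavy Ball adds a momentum term to a gradient step taken at the current iterate, while Nesterov's method takes the gradient at a look-ahead point. Below is a minimal NumPy sketch of both updates on a toy quadratic; the problem, step size and momentum value are illustrative choices, not code or settings from the thesis.

      import numpy as np

      rng = np.random.default_rng(0)
      M = rng.standard_normal((50, 20))
      A = M.T @ M + 0.1 * np.eye(20)      # positive definite Hessian of f(x) = 0.5 x'Ax - b'x
      b = rng.standard_normal(20)

      def grad(x):
          return A @ x - b

      x_star = np.linalg.solve(A, b)      # exact minimizer, used only to measure the error
      L = np.linalg.eigvalsh(A).max()     # Lipschitz constant of the gradient
      alpha, beta = 1.0 / L, 0.9          # illustrative step size and momentum

      def heavy_ball(iters=500):
          # x_{k+1} = x_k - alpha * grad(x_k) + beta * (x_k - x_{k-1})
          x_prev = x = np.zeros(20)
          for _ in range(iters):
              x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
          return x

      def nesterov(iters=500):
          # y_k     = x_k + beta * (x_k - x_{k-1})   (look-ahead point)
          # x_{k+1} = y_k - alpha * grad(y_k)
          x_prev = x = np.zeros(20)
          for _ in range(iters):
              y = x + beta * (x - x_prev)
              x, x_prev = y - alpha * grad(y), x
          return x

      for name, sol in (("heavy ball", heavy_ball()), ("nesterov", nesterov())):
          print(name, np.linalg.norm(sol - x_star))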

