Dialnet


Linear convergence of primal–dual gradient methods and their performance in distributed optimization

  • Authors: Sulaiman A. Alghunaim, A. H. Sayed
  • Published in: Automatica: A journal of IFAC, the International Federation of Automatic Control, ISSN 0005-1098, No. 117, 2020
  • Language: English
  • Full text not available
  • Abstract
    • In this work, we revisit a classical incremental implementation of the primal-descent dual-ascent gradient method used for the solution of equality constrained optimization problems. We provide a short proof that establishes the linear (exponential) convergence of the algorithm for smooth strongly-convex cost functions and study its relation to the non-incremental implementation. We also study the effect of the augmented Lagrangian penalty term on the performance of distributed optimization algorithms for the minimization of aggregate cost functions over multi-agent networks.
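The method described in the abstract can be illustrated with a minimal sketch: a primal-descent dual-ascent iteration on the augmented Lagrangian L(x, y) = f(x) + yᵀ(Ax − b) + (ρ/2)‖Ax − b‖² for a small strongly convex quadratic. The problem data, step sizes, and penalty value below are illustrative assumptions, not taken from the paper; the "incremental" flavor here means the dual update uses the freshly updated primal iterate.

```python
import numpy as np

# Sketch of an incremental primal-descent dual-ascent gradient method for
#     min_x f(x)  subject to  Ax = b,
# applied to the augmented Lagrangian
#     L(x, y) = f(x) + y^T (Ax - b) + (rho/2) ||Ax - b||^2.
# All problem data and step sizes below are illustrative choices.

rng = np.random.default_rng(0)
n, m = 5, 2
Q = np.diag(rng.uniform(1.0, 3.0, n))   # f(x) = 0.5 x^T Q x, smooth and strongly convex
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = np.zeros(n)                          # primal variable
y = np.zeros(m)                          # dual variable (Lagrange multiplier)
mu, eta, rho = 0.05, 0.05, 1.0           # primal step, dual step, penalty parameter

for _ in range(5000):
    # Primal descent on L(., y): gradient of f plus constraint and penalty terms.
    grad_x = Q @ x + A.T @ y + rho * A.T @ (A @ x - b)
    x = x - mu * grad_x
    # Incremental dual ascent: uses the just-updated x (Gauss-Seidel style),
    # unlike the non-incremental variant, which would use the previous x.
    y = y + eta * (A @ x - b)

# Under linear (exponential) convergence the constraint residual shrinks geometrically.
residual = np.linalg.norm(A @ x - b)
print(residual)
```

Swapping the dual update to use the pre-update x gives the non-incremental variant the abstract compares against; increasing ρ strengthens the penalty term whose effect on distributed algorithms the paper studies.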

