Abstract of Linear convergence of primal–dual gradient methods and their performance in distributed optimization

Sulaiman A. Alghunaim, A. H. Sayed

In this work, we revisit a classical incremental implementation of the primal-descent dual-ascent gradient method used for the solution of equality constrained optimization problems. We provide a short proof that establishes the linear (exponential) convergence of the algorithm for smooth strongly-convex cost functions and study its relation to the non-incremental implementation. We also study the effect of the augmented Lagrangian penalty term on the performance of distributed optimization algorithms for the minimization of aggregate cost functions over multi-agent networks.
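
The abstract refers to a primal-descent dual-ascent iteration with an augmented Lagrangian penalty for equality constrained problems. Below is a minimal Python sketch of that style of update on a small quadratic test problem; it is not the paper's exact algorithm or setup. The problem data, step sizes mu_x and mu_lam, and penalty weight rho are illustrative assumptions, and the dual step uses the freshly updated primal iterate, in the spirit of the incremental implementation the abstract mentions.

    import numpy as np

    # Sketch: minimize f(x) = 0.5 x^T A x - c^T x  subject to  B x = b,
    # using a primal gradient-descent step on the augmented Lagrangian
    # followed by a dual gradient-ascent step (all values are illustrative).
    rng = np.random.default_rng(0)
    n, m = 5, 2
    A = np.diag(rng.uniform(1.0, 4.0, n))   # smooth, strongly convex quadratic cost
    c = rng.standard_normal(n)
    B = rng.standard_normal((m, n))         # equality constraint B x = b
    b = rng.standard_normal(m)

    def grad_f(x):
        return A @ x - c

    x = np.zeros(n)
    lam = np.zeros(m)
    mu_x, mu_lam, rho = 0.05, 0.05, 1.0     # assumed step sizes and penalty weight

    for _ in range(5000):
        # primal descent on the augmented Lagrangian
        x = x - mu_x * (grad_f(x) + B.T @ lam + rho * B.T @ (B @ x - b))
        # dual ascent using the updated primal iterate (incremental-style update)
        lam = lam + mu_lam * (B @ x - b)

    print("constraint residual:", np.linalg.norm(B @ x - b))

For smooth strongly-convex costs and suitable step sizes, iterations of this kind are the setting in which the paper establishes linear (exponential) convergence; a non-incremental variant would instead use the previous primal iterate in the dual step.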

