Sharper Bounds for Proximal Gradient Algorithms with Errors

Published in SIAM Journal on Optimization, 2024

This article investigates the convergence of proximal gradient algorithms for convex composite optimization in the presence of computational errors in both the gradient evaluations and the proximal operator. It generalizes deterministic convergence analyses to the quasi-Fejér setting, derives tighter deterministic and probabilistic bounds, and quantifies how approximate computing, early termination, and inexact proximal mappings affect performance. Applications include Model Predictive Control (MPC) with sparse controls and LASSO solved under reduced precision and approximate proximal operators. Under certain statistical assumptions on the errors, it also shows that the derived bounds hold with high probability.
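As a rough illustration of the setting studied in the paper, the sketch below runs a plain proximal gradient method (ISTA) on a LASSO problem while injecting additive perturbations into the gradient and the proximal step, mimicking reduced-precision gradients and an approximate proximal operator. The perturbation model, function names, and error magnitudes are illustrative assumptions, not the paper's actual error model or bounds.

```python
import numpy as np

def soft_threshold(v, tau):
    # Exact proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_prox_grad_lasso(A, b, lam, n_iter=500,
                            grad_err=1e-6, prox_err=1e-6, seed=0):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    with additive noise standing in for inexact gradient/prox computations.
    (Hypothetical sketch: the error model here is illustrative only.)"""
    rng = np.random.default_rng(seed)
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)
        g += grad_err * rng.standard_normal(x.shape)   # gradient error e_k
        x = soft_threshold(x - step * g, step * lam)
        x += prox_err * rng.standard_normal(x.shape)   # proximal-operator error
    return x
```

With small error levels the iterates remain close to those of the exact method, which is the qualitative behavior the paper's quasi-Fejér analysis makes precise.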

Recommended citation: Hamadouche, A., Wu, Y., Wallace, A. M., & Mota, J. F. C. (2024). “Sharper Bounds for Proximal Gradient Algorithms with Errors.” *SIAM Journal on Optimization*, 34(1), 278-305.