Adaptive Algorithms for Online Convex Optimization with Long-term Constraints

Rodolphe Jenatton, Jim Huang, Cedric Archambeau
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:402-411, 2016.

Abstract

We present an adaptive online gradient descent algorithm to solve online convex optimization problems with long-term constraints, which are constraints that need to be satisfied when accumulated over a finite number of rounds T, but can be violated in intermediate rounds. For some user-defined trade-off parameter β ∈ (0, 1), the proposed algorithm achieves cumulative regret bounds of O(T^{max(β, 1−β)}) and O(T^{1−β/2}), respectively, for the loss and the constraint violations. Our results hold for convex losses, can handle arbitrary convex constraints and rely on a single computationally efficient algorithm. Our contributions improve over the best known cumulative regret bounds of Mahdavi et al. (2012), which are respectively O(T^{1/2}) and O(T^{3/4}) for general convex domains, and respectively O(T^{2/3}) and O(T^{2/3}) when the domain is further restricted to be a polyhedral set. We supplement the analysis with experiments validating the performance of our algorithm in practice.
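For intuition, instantiating the trade-off parameter recovers the earlier rates: β = 1/2 gives O(T^{1/2}) loss regret and O(T^{3/4}) constraint violations, matching Mahdavi et al. (2012) for general convex domains, while β = 2/3 gives O(T^{2/3}) for both, matching their polyhedral-domain rates without the polyhedral restriction. Below is a minimal sketch of the kind of primal-dual online gradient step such algorithms perform, following the general saddle-point formulation of Mahdavi et al. (2012); the function names and the t^{−β} step-size schedules are illustrative assumptions, not the exact updates analyzed in this paper.

    import numpy as np

    def project_ball(x, radius=1.0):
        # Euclidean projection onto {x : ||x||_2 <= radius}; a stand-in for
        # projection onto whatever simple domain B the problem uses.
        norm = np.linalg.norm(x)
        return x if norm <= radius else x * (radius / norm)

    def primal_dual_step(x, lam, grad_f, g_val, grad_g, t, beta=0.5):
        # One round of an adaptive primal-dual online gradient update for
        # losses f_t and a long-term constraint g(x) <= 0, based on the
        # regularized Lagrangian
        #   L_t(x, lam) = f_t(x) + lam * g(x) - (theta_t / 2) * lam^2.
        # The t^(-beta) schedules below are assumptions for illustration.
        eta = t ** (-beta)                   # primal step size
        mu = t ** (-beta)                    # dual step size
        theta = eta                          # dual regularization
        x_next = project_ball(x - eta * (grad_f + lam * grad_g))
        lam_next = max(0.0, lam + mu * (g_val - theta * lam))  # keep lam >= 0
        return x_next, lam_next

    # Toy run: linear losses f_t(x) = <c_t, x> with g(x) = sum(x) - 1 <= 0.
    rng = np.random.default_rng(0)
    x, lam = np.zeros(5), 0.0
    for t in range(1, 1001):
        c_t = rng.normal(size=5)             # adversary reveals f_t
        x, lam = primal_dual_step(x, lam, grad_f=c_t,
                                  g_val=float(x.sum() - 1.0),
                                  grad_g=np.ones(5), t=t)

In this sketch the dual variable λ tracks accumulated constraint violation: it grows while g(x) > 0, increasingly penalizing infeasible directions in the primal step, and shrinks back toward zero once the constraint holds, which is what permits violations in intermediate rounds provided they are compensated over the T rounds.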

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-jenatton16,
  title     = {Adaptive Algorithms for Online Convex Optimization with Long-term Constraints},
  author    = {Jenatton, Rodolphe and Huang, Jim and Archambeau, Cedric},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {402--411},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/jenatton16.pdf},
  url       = {https://proceedings.mlr.press/v48/jenatton16.html},
  abstract  = {We present an adaptive online gradient descent algorithm to solve online convex optimization problems with long-term constraints, which are constraints that need to be satisfied when accumulated over a finite number of rounds T, but can be violated in intermediate rounds. For some user-defined trade-off parameter β ∈ (0, 1), the proposed algorithm achieves cumulative regret bounds of O(T^{max(β, 1−β)}) and O(T^{1−β/2}), respectively, for the loss and the constraint violations. Our results hold for convex losses, can handle arbitrary convex constraints and rely on a single computationally efficient algorithm. Our contributions improve over the best known cumulative regret bounds of Mahdavi et al. (2012), which are respectively O(T^{1/2}) and O(T^{3/4}) for general convex domains, and respectively O(T^{2/3}) and O(T^{2/3}) when the domain is further restricted to be a polyhedral set. We supplement the analysis with experiments validating the performance of our algorithm in practice.}
}
APA
Jenatton, R., Huang, J. & Archambeau, C. (2016). Adaptive Algorithms for Online Convex Optimization with Long-term Constraints. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:402-411. Available from https://proceedings.mlr.press/v48/jenatton16.html.
