Anytime Exploration for Multi-armed Bandits using Confidence Information

Kwang-Sung Jun, Robert Nowak
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:974-982, 2016.

Abstract

We introduce anytime Explore-m, a pure exploration problem for multi-armed bandits (MAB) that requires predicting the top-m arms at every time step. Anytime Explore-m is more practical than the fixed-budget or fixed-confidence formulations of the top-m problem, since many applications involve a finite but unpredictable budget. However, the development and analysis of anytime algorithms present many challenges. We propose AT-LUCB (AnyTime Lower and Upper Confidence Bound), the first nontrivial algorithm that provably solves anytime Explore-m. Our analysis shows that the sample complexity of AT-LUCB is competitive with that of anytime variants of existing algorithms. Moreover, our empirical evaluation shows that AT-LUCB performs as well as or better than state-of-the-art baseline methods for anytime Explore-m.
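
For intuition, the following is a minimal, illustrative Python sketch of an LUCB-style loop that reports a top-m guess at every time step. It shows the general lower/upper confidence bound idea the paper builds on, not the exact AT-LUCB confidence schedule; the Bernoulli reward simulator, the Hoeffding-style radius, the choice of delta, and the function name lucb_style_anytime_top_m are illustrative assumptions, not taken from the paper.

import math
import random

def lucb_style_anytime_top_m(means, m, horizon, delta=0.1, seed=0):
    """Illustrative LUCB-style loop: at every step it records a top-m guess.

    `means` are hidden Bernoulli arm means used only to simulate rewards;
    the confidence radius below is a generic Hoeffding-style choice,
    not the AT-LUCB schedule from the paper.
    """
    rng = random.Random(seed)
    n_arms = len(means)
    pulls = [0] * n_arms          # number of times each arm was sampled
    sums = [0.0] * n_arms         # running reward sums

    def pull(i):
        pulls[i] += 1
        sums[i] += 1.0 if rng.random() < means[i] else 0.0

    # sample every arm once so empirical means and bounds are defined
    for i in range(n_arms):
        pull(i)

    history = []
    for t in range(n_arms, horizon):
        mu = [sums[i] / pulls[i] for i in range(n_arms)]
        rad = [math.sqrt(math.log(4.0 * n_arms * (t + 1) ** 2 / delta)
                         / (2 * pulls[i]))
               for i in range(n_arms)]
        ranked = sorted(range(n_arms), key=lambda i: mu[i], reverse=True)
        top, rest = ranked[:m], ranked[m:]

        # LUCB-style pair: weakest lower bound inside the current top set
        # versus strongest upper bound outside it; sample both to separate them
        weakest_top = min(top, key=lambda i: mu[i] - rad[i])
        strongest_rest = max(rest, key=lambda i: mu[i] + rad[i])
        pull(weakest_top)
        pull(strongest_rest)

        history.append(set(top))   # the anytime recommendation at step t
    return history

if __name__ == "__main__":
    recs = lucb_style_anytime_top_m([0.9, 0.8, 0.5, 0.4, 0.2], m=2, horizon=2000)
    print("final top-2 guess:", recs[-1])   # typically {0, 1}

The pair of pulls per step (the weakest lower confidence bound inside the current top-m versus the strongest upper confidence bound outside it) is the LUCB-style rule the algorithm's name refers to; AT-LUCB's actual schedule and its guarantees are given in the paper.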

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-jun16,
  title = {Anytime Exploration for Multi-armed Bandits using Confidence Information},
  author = {Jun, Kwang-Sung and Nowak, Robert},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages = {974--982},
  year = {2016},
  editor = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume = {48},
  series = {Proceedings of Machine Learning Research},
  address = {New York, New York, USA},
  month = {20--22 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v48/jun16.pdf},
  url = {https://proceedings.mlr.press/v48/jun16.html},
  abstract = {We introduce anytime Explore-m, a pure exploration problem for multi-armed bandits (MAB) that requires making a prediction of the top-m arms at every time step. Anytime Explore-m is more practical than fixed budget or fixed confidence formulations of the top-m problem, since many applications involve a finite, but unpredictable, budget. However, the development and analysis of anytime algorithms present many challenges. We propose AT-LUCB (AnyTime Lower and Upper Confidence Bound), the first nontrivial algorithm that provably solves anytime Explore-m. Our analysis shows that the sample complexity of AT-LUCB is competitive to anytime variants of existing algorithms. Moreover, our empirical evaluation on AT-LUCB shows that AT-LUCB performs as well as or better than state-of-the-art baseline methods for anytime Explore-m.}
}
Endnote
%0 Conference Paper
%T Anytime Exploration for Multi-armed Bandits using Confidence Information
%A Kwang-Sung Jun
%A Robert Nowak
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-jun16
%I PMLR
%P 974--982
%U https://proceedings.mlr.press/v48/jun16.html
%V 48
%X We introduce anytime Explore-m, a pure exploration problem for multi-armed bandits (MAB) that requires making a prediction of the top-m arms at every time step. Anytime Explore-m is more practical than fixed budget or fixed confidence formulations of the top-m problem, since many applications involve a finite, but unpredictable, budget. However, the development and analysis of anytime algorithms present many challenges. We propose AT-LUCB (AnyTime Lower and Upper Confidence Bound), the first nontrivial algorithm that provably solves anytime Explore-m. Our analysis shows that the sample complexity of AT-LUCB is competitive to anytime variants of existing algorithms. Moreover, our empirical evaluation on AT-LUCB shows that AT-LUCB performs as well as or better than state-of-the-art baseline methods for anytime Explore-m.
RIS
TY - CPAPER
TI - Anytime Exploration for Multi-armed Bandits using Confidence Information
AU - Kwang-Sung Jun
AU - Robert Nowak
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-jun16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 974
EP - 982
L1 - http://proceedings.mlr.press/v48/jun16.pdf
UR - https://proceedings.mlr.press/v48/jun16.html
AB - We introduce anytime Explore-m, a pure exploration problem for multi-armed bandits (MAB) that requires making a prediction of the top-m arms at every time step. Anytime Explore-m is more practical than fixed budget or fixed confidence formulations of the top-m problem, since many applications involve a finite, but unpredictable, budget. However, the development and analysis of anytime algorithms present many challenges. We propose AT-LUCB (AnyTime Lower and Upper Confidence Bound), the first nontrivial algorithm that provably solves anytime Explore-m. Our analysis shows that the sample complexity of AT-LUCB is competitive to anytime variants of existing algorithms. Moreover, our empirical evaluation on AT-LUCB shows that AT-LUCB performs as well as or better than state-of-the-art baseline methods for anytime Explore-m.
ER -
APA
Jun, K. & Nowak, R. (2016). Anytime Exploration for Multi-armed Bandits using Confidence Information. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:974-982. Available from https://proceedings.mlr.press/v48/jun16.html.
