Contextual Bandit Learning with Predictable Rewards

Alekh Agarwal, Miroslav Dudik, Satyen Kale, John Langford, Robert Schapire
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:19-26, 2012.

Abstract

Contextual bandit learning is a reinforcement learning problem where the learner repeatedly receives a set of features (context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a (known) function class that always predicts the expected reward given the action and context. Under this assumption, we show three things. We present a new algorithm, Regressor Elimination, with regret similar to the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior performance in the worst case even with the realizability assumption. However, we do show that for any set of policies (mapping contexts to actions), there is a distribution over rewards (given context) such that our new algorithm has constant regret, unlike the previous approaches.
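To make the protocol concrete, here is a minimal Python sketch of the interaction loop described above under the realizability assumption. The linear regressor class, the noise model, the uniform-exploration policy, and all variable names (f_star, theta, etc.) are illustrative assumptions for this sketch only; it is not the paper's Regressor Elimination algorithm.

import numpy as np

# Contextual bandit protocol: each round the learner sees a context x,
# chooses an action a, and observes only the reward of that action.
# Realizability: expected reward equals f_star(x, a) for some f_star in a
# known class (here, linear functions, as an assumed example).

rng = np.random.default_rng(0)
n_actions, n_features, n_rounds = 5, 10, 1000

# Assumed "true" regressor: f_star(x, a) = <theta[a], x>.
theta = rng.normal(size=(n_actions, n_features))

def f_star(x, a):
    return theta[a] @ x  # expected reward of action a in context x

total_reward = 0.0
for t in range(n_rounds):
    x = rng.normal(size=n_features)           # context revealed to the learner
    a = rng.integers(n_actions)               # placeholder policy (uniform exploration)
    r = f_star(x, a) + rng.normal(scale=0.1)  # observed reward = expected reward + noise
    total_reward += r
    # A method such as Regressor Elimination would use the tuple (x, a, r) to
    # discard regressors that fit observed rewards poorly and to guide future actions.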

Cite this Paper


BibTeX
@InProceedings{pmlr-v22-agarwal12, title = {Contextual Bandit Learning with Predictable Rewards}, author = {Agarwal, Alekh and Dudik, Miroslav and Kale, Satyen and Langford, John and Schapire, Robert}, booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics}, pages = {19--26}, year = {2012}, editor = {Lawrence, Neil D. and Girolami, Mark}, volume = {22}, series = {Proceedings of Machine Learning Research}, address = {La Palma, Canary Islands}, month = {21--23 Apr}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v22/agarwal12/agarwal12.pdf}, url = {https://proceedings.mlr.press/v22/agarwal12.html}, abstract = {Contextual bandit learning is a reinforcement learning problem where the learner repeatedly receives a set of features (context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a (known) function class that always predicts the expected reward given the action and context. Under this assumption, we show three things. We present a new algorithm, Regressor Elimination, with regret similar to the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior performance in the worst case even with the realizability assumption. However, we do show that for \emph{any} set of policies (mapping contexts to actions), there is a distribution over rewards (given context) such that our new algorithm has \emph{constant} regret, unlike the previous approaches.} }
Endnote
%0 Conference Paper %T Contextual Bandit Learning with Predictable Rewards %A Alekh Agarwal %A Miroslav Dudik %A Satyen Kale %A John Langford %A Robert Schapire %B Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2012 %E Neil D. Lawrence %E Mark Girolami %F pmlr-v22-agarwal12 %I PMLR %P 19--26 %U https://proceedings.mlr.press/v22/agarwal12.html %V 22 %X Contextual bandit learning is a reinforcement learning problem where the learner repeatedly receives a set of features (context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a (known) function class that always predicts the expected reward given the action and context. Under this assumption, we show three things. We present a new algorithm, Regressor Elimination, with regret similar to the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior performance in the worst case even with the realizability assumption. However, we do show that for any set of policies (mapping contexts to actions), there is a distribution over rewards (given context) such that our new algorithm has constant regret, unlike the previous approaches.
RIS
TY - CPAPER TI - Contextual Bandit Learning with Predictable Rewards AU - Alekh Agarwal AU - Miroslav Dudik AU - Satyen Kale AU - John Langford AU - Robert Schapire BT - Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics DA - 2012/03/21 ED - Neil D. Lawrence ED - Mark Girolami ID - pmlr-v22-agarwal12 PB - PMLR DP - Proceedings of Machine Learning Research VL - 22 SP - 19 EP - 26 L1 - http://proceedings.mlr.press/v22/agarwal12/agarwal12.pdf UR - https://proceedings.mlr.press/v22/agarwal12.html AB - Contextual bandit learning is a reinforcement learning problem where the learner repeatedly receives a set of features (context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a (known) function class that always predicts the expected reward given the action and context. Under this assumption, we show three things. We present a new algorithm, Regressor Elimination, with regret similar to the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior performance in the worst case even with the realizability assumption. However, we do show that for any set of policies (mapping contexts to actions), there is a distribution over rewards (given context) such that our new algorithm has constant regret, unlike the previous approaches. ER -
APA
Agarwal, A., Dudik, M., Kale, S., Langford, J. & Schapire, R. (2012). Contextual Bandit Learning with Predictable Rewards. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 22:19-26. Available from https://proceedings.mlr.press/v22/agarwal12.html.
