Generalization and Exploration via Randomized Value Functions

Ian Osband, Benjamin Van Roy, Zheng Wen
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2377-2386, 2016.

Abstract

We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or epsilon-greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates near-optimality in a tabula rasa learning context. More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.
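The abstract describes RLSVI only at a high level. For orientation, here is a minimal Python sketch of the idea for the finite-horizon, linearly parameterized case: run least-squares value iteration backwards through the horizon, but draw the weights of each timestep's Q-function from the Gaussian posterior of a Bayesian linear regression rather than using the point estimate, then act greedily for the whole episode. The function names, the feature map phi, the buffer layout, and all hyperparameters are illustrative assumptions, not the authors' implementation.

import numpy as np

def rlsvi_backward_pass(buffers, phi, d, num_actions, H,
                        sigma=1.0, lam=1.0, rng=None):
    """One RLSVI planning pass (sketch).

    buffers[h]: list of (state, action, reward, next_state) tuples
    observed at timestep h. phi(s, a): length-d feature vector.
    Returns sampled weight vectors theta[0..H-1], one per timestep.
    """
    rng = rng or np.random.default_rng()
    theta = [np.zeros(d) for _ in range(H + 1)]  # theta[H] = 0 beyond horizon
    for h in reversed(range(H)):
        if not buffers[h]:
            # No data yet at this timestep: sample from the N(0, I/lam) prior.
            theta[h] = rng.multivariate_normal(np.zeros(d), np.eye(d) / lam)
            continue
        Phi = np.stack([phi(s, a) for (s, a, r, s2) in buffers[h]])
        # Bootstrapped targets use the *sampled* next-step value function;
        # this is what propagates exploration-driving randomness backwards.
        # (Terminal-state handling is omitted for brevity.)
        y = np.array([r + max(phi(s2, b) @ theta[h + 1]
                              for b in range(num_actions))
                      for (_, _, r, s2) in buffers[h]])
        # Bayesian linear regression posterior under a N(0, I/lam) prior.
        cov = np.linalg.inv(Phi.T @ Phi / sigma**2 + lam * np.eye(d))
        mean = cov @ (Phi.T @ y) / sigma**2
        # Key departure from plain LSVI with Boltzmann/epsilon-greedy:
        # randomize the value function itself instead of the actions.
        theta[h] = rng.multivariate_normal(mean, cov)
    return theta[:H]

def greedy_action(theta_h, phi, s, num_actions):
    """Act greedily with respect to the sampled Q-function at timestep h."""
    return max(range(num_actions), key=lambda a: phi(s, a) @ theta_h)

Because the randomness is held fixed within an episode, the agent commits to a coherent, optimistic-or-pessimistic hypothesis for many steps at a time, which is why this can explore far more efficiently than per-step action dithering.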

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-osband16,
  title     = {Generalization and Exploration via Randomized Value Functions},
  author    = {Osband, Ian and Van Roy, Benjamin and Wen, Zheng},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2377--2386},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/osband16.pdf},
  url       = {https://proceedings.mlr.press/v48/osband16.html}
}
APA
Osband, I., Van Roy, B., & Wen, Z. (2016). Generalization and Exploration via Randomized Value Functions. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research, 48:2377-2386. Available from https://proceedings.mlr.press/v48/osband16.html.