Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning

Philip Thomas, Emma Brunskill
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2139-2148, 2016.

Abstract

In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods; that is, it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang & Li, 2015), and a new way to mix between model-based and importance-sampling-based estimates.
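For context, the doubly robust estimator that this work extends (Jiang & Li, 2015) combines per-step importance weights with an approximate model's value estimates used as a control variate. The Python sketch below is a minimal, illustrative implementation of that baseline estimator, not the paper's own code; the trajectory format and the pi_e, pi_b, q_hat, and v_hat callables are assumptions made for the example.

import numpy as np

def doubly_robust_estimate(trajectories, pi_e, pi_b, q_hat, v_hat, gamma=1.0):
    # Sequential doubly robust (DR) off-policy estimate of the evaluation
    # policy's expected return (Jiang & Li, 2015).
    #   trajectories: list of trajectories, each a list of (s, a, r) tuples
    #                 generated by the behavior policy.
    #   pi_e(a, s), pi_b(a, s): action probabilities under the evaluation
    #                 and behavior policies (assumed callables).
    #   q_hat(s, a), v_hat(s): value estimates from an approximate model.
    per_trajectory = []
    for trajectory in trajectories:
        w = 1.0      # running importance weight: prod_t pi_e / pi_b
        total = 0.0
        for t, (s, a, r) in enumerate(trajectory):
            w_prev = w
            w *= pi_e(a, s) / pi_b(a, s)
            # Importance-weighted reward minus the model-based control variate.
            total += gamma ** t * (w * r - (w * q_hat(s, a) - w_prev * v_hat(s)))
        per_trajectory.append(total)
    return np.mean(per_trajectory)

The paper's contribution goes beyond this baseline, including a new way to blend such model-based and importance-sampling terms to further reduce mean squared error.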

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-thomasa16,
  title     = {Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning},
  author    = {Thomas, Philip and Brunskill, Emma},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2139--2148},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/thomasa16.pdf},
  url       = {https://proceedings.mlr.press/v48/thomasa16.html},
  abstract  = {In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods; that is, it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang & Li, 2015), and a new way to mix between model-based and importance-sampling-based estimates.}
}
EndNote
%0 Conference Paper
%T Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning
%A Philip Thomas
%A Emma Brunskill
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-thomasa16
%I PMLR
%P 2139--2148
%U https://proceedings.mlr.press/v48/thomasa16.html
%V 48
%X In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods; that is, it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang & Li, 2015), and a new way to mix between model-based and importance-sampling-based estimates.
RIS
TY - CPAPER
TI - Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning
AU - Philip Thomas
AU - Emma Brunskill
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-thomasa16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 2139
EP - 2148
L1 - http://proceedings.mlr.press/v48/thomasa16.pdf
UR - https://proceedings.mlr.press/v48/thomasa16.html
AB - In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods; that is, it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang & Li, 2015), and a new way to mix between model-based and importance-sampling-based estimates.
ER -
APA
Thomas, P. & Brunskill, E. (2016). Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:2139-2148. Available from https://proceedings.mlr.press/v48/thomasa16.html.
